Setting Up the Intel OpenVINO Environment on Ubuntu 18.04.3

The OpenVINO toolkit comes in an open-source edition and the Intel distribution. The Intel distribution is Intel's inference-focused deep learning toolkit: it converts TensorFlow, Caffe, and ONNX models into a representation that runs on Intel hardware, including Movidius VPUs and the Movidius Neural Compute Stick 2.
Official installation guide: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html

Contents

  • Registration and download
  • Installation
    • Install the OpenVINO toolkit core components
    • Install additional dependencies
    • Set environment variables
      • Temporary activation
      • Adding to bashrc or zshrc
    • Configure the Model Optimizer
      • Configuration steps
        • Configure TensorFlow
    • Run the verification scripts
      • Image classification verification script
      • Inference pipeline verification script
    • Configure the Neural Compute Stick
      • Configuration
      • Test

Registration and download

Registration page: https://software.intel.com/en-us/openvino-toolkit/choose-download
Select the Linux version, then click register and download.
You will receive an email containing your activation code and the download link.

Installation

Install the OpenVINO toolkit core components

# Extract the archive
>>> tar -xzvf l_openvino_toolkit_p_2020.1.023.tgz
>>> cd l_openvino_toolkit_p_2020.1.023
# The installer offers both a GUI and a CLI mode;
# the CLI installer is used here
>>> sudo ./install.sh
Welcome
--------------------------------------------------------------------------------
Welcome to the Intel® Distribution of OpenVINO™ toolkit 2020.1 for Linux*
--------------------------------------------------------------------------------
The Intel installation wizard will install the Intel® Distribution of OpenVINO™
toolkit 2020.1 for Linux* to your system.

The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and
solutions that emulate human vision. Based on Convolutional Neural Networks
(CNN), the toolkit extends computer vision (CV) workloads across Intel®
hardware, maximizing performance. The Intel Distribution of OpenVINO toolkit
includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

Before installation please check system requirements:
https://docs.openvinotoolkit.org/2020.1/_docs_install_guides_installing_openvino
_linux.html#system_requirements
and run following script to install external software dependencies:

sudo -E ./install_openvino_dependencies.sh

Please note that after the installation is complete, additional configuration
steps are still required.

For the complete installation procedure, refer to the Installation guide:
https://docs.openvinotoolkit.org/2020.1/_docs_install_guides_installing_openvino
_linux.html.

You will complete the following steps:
   1.  Welcome
   2.  End User License Agreement
   3.  Prerequisites
   4.  Configuration
   5.  Installation
   6.  First Part of Installation is Complete

--------------------------------------------------------------------------------
Press "Enter" key to continue or "q" to quit: 
>>> Enter
* Other names and brands may be claimed as the property of others
--------------------------------------------------------------------------------
Type "accept" to continue or "decline" to go back to the previous menu: 
>>> accept
--------------------------------------------------------------------------------

   1. I consent to the collection of my Information
   2. I do NOT consent to the collection of my Information

   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection: 
>>> 2
--------------------------------------------------------------------------------
Missing optional prerequisites
-- Intel® GPU is not detected on this machine
-- Intel® Graphics Compute Runtime for OpenCL™ Driver is missing but you will
be prompted to install later
--------------------------------------------------------------------------------
   1. Skip prerequisites [ default ]
   2. Show the detailed info about issue(s)
   3. Re-check the prerequisites

   h. Help
   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection or press "Enter" to accept default choice [ 1 ]: 
>>> 1
Configuration > Pre-install Summary
--------------------------------------------------------------------------------
Install location:
    /opt/intel


The following components will be installed:
    Inference Engine                                                       272MB
        Inference Engine Development Kit                                    63MB
        Inference Engine Runtime for Intel® CPU                             25MB
        Inference Engine Runtime for Intel® Processor Graphics              17MB
        Inference Engine Runtime for Intel® Movidius™ VPU                  78MB
        Inference Engine Runtime for Intel® Gaussian Neural Accelerator      5MB
        Inference Engine Runtime for Intel® Vision Accelerator Design with  15MB
Intel® Movidius™ VPUs

    Model Optimizer                                                          4MB
        Model Optimizer Tool                                                 4MB

    Deep Learning Workbench                                                178MB
        Deep Learning Workbench                                            178MB

    OpenCV*                                                                118MB
        OpenCV* Libraries                                                  107MB

    Open Model Zoo                                                         117MB
        Open Model Zoo                                                     117MB

    Intel(R) Media SDK                                                     128MB
        Intel(R) Media SDK                                                 128MB

   Install space required:  668MB

--------------------------------------------------------------------------------

   1. Accept configuration and begin installation [ default ]
   2. Customize installation

   h. Help
   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection or press "Enter" to accept default choice [ 1 ]: 
>>> 1
Prerequisites > Missing Prerequisite(s)
--------------------------------------------------------------------------------
There are one or more unresolved issues based on your system configuration and
component selection.

You can resolve all the issues without exiting the installer and re-check, or
you can exit, resolve the issues, and then run the installation again.

--------------------------------------------------------------------------------
Missing optional prerequisites
-- Intel® GPU is not detected on this machine
-- Intel® Graphics Compute Runtime for OpenCL™ Driver is missing but you will
be prompted to install later
--------------------------------------------------------------------------------
   1. Skip prerequisites [ default ]
   2. Show the detailed info about issue(s)
   3. Re-check the prerequisites

   h. Help
   b. Back
   q. Quit installation

--------------------------------------------------------------------------------
Please type a selection or press "Enter" to accept default choice [ 1 ]: 
>>> 1
First Part of Installation is Complete
--------------------------------------------------------------------------------
The first part of Intel® Distribution of OpenVINO™ toolkit 2020.1 for Linux*
has been successfully installed in 
/opt/intel/openvino_2020.1.023.

ADDITIONAL STEPS STILL REQUIRED: 

Open the Installation guide at:
 https://docs.openvinotoolkit.org/2020.1/_docs_install_guides_installing_openvin
o_linux.html 
and follow the guide instructions to complete the remaining tasks listed below:

 • Set Environment variables 
 • Configure Model Optimizer 
 • Run the Verification Scripts to Verify Installation and Compile Samples

--------------------------------------------------------------------------------
Press "Enter" key to quit: 

Install additional dependencies

>>> cd /opt/intel/openvino/install_dependencies
>>> sudo -E ./install_openvino_dependencies.sh
# This installs a series of packages via apt

Set environment variables

Temporary activation

>>> source /opt/intel/openvino/bin/setupvars.sh

Adding to bashrc or zshrc

>>> cd ~
>>> vim .zshrc
source /opt/intel/openvino/bin/setupvars.sh

Adding the line to bashrc/zshrc has a drawback: the OpenVINO build of OpenCV shadows any previously installed OpenCV. I therefore recommend sourcing setupvars.sh manually, only when you actually use OpenVINO.
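To check which OpenCV build the interpreter will pick up at any moment, you can ask Python where it would import `cv2` from. This is a small illustrative helper, not part of OpenVINO; the only assumption is the module name:

```python
from importlib.util import find_spec

def module_origin(name):
    """Return the file a module would be imported from, or None if not found."""
    spec = find_spec(name)
    return spec.origin if spec else None

# After `source setupvars.sh`, this should point somewhere under /opt/intel/openvino/;
# otherwise it points at your pip- or apt-installed OpenCV.
print(module_origin("cv2"))
```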

Configure the Model Optimizer

To explain the Model Optimizer, here is a direct translation of the official documentation:

*The Model Optimizer is a Python-based command-line tool for importing trained models from popular deep learning frameworks such as Caffe, TensorFlow, Apache MXNet, ONNX, and Kaldi.
The Model Optimizer is a key component of the Intel OpenVINO toolkit. You cannot perform inference with a trained model without first converting it with the Model Optimizer. Converting a trained model produces its Intermediate Representation (IR), which describes the whole model in two files:

  • .xml: describes the network topology
  • .bin: contains all weights and biases in binary form*
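Since the .xml half of an IR is ordinary XML, the topology can be inspected with the standard library alone. A minimal sketch; the fragment below is a hand-written toy in the IR style, not a real exported model:

```python
import xml.etree.ElementTree as ET

# A toy fragment imitating the IR .xml layout (not a real exported model).
IR_XML = """
<net name="toy_net" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

def list_layers(xml_text):
    """Return (name, type) pairs for every layer in an IR-style .xml."""
    root = ET.fromstring(xml_text)
    return [(l.get("name"), l.get("type")) for l in root.iter("layer")]

for name, kind in list_layers(IR_XML):
    print(name, kind)
```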

Configuration steps

Notes:

  1. You can configure support for all supported frameworks at once, or configure them individually as needed;
  2. Configuring TensorFlow support on CentOS is not supported, because TensorFlow does not support CentOS;
  3. The configuration process requires an internet connection.

Configure TensorFlow

>>> cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
>>> sudo ./install_prerequisites_tf.sh

If, after repeatedly installing and removing tensorflow or tensorflow-gpu, pip reports that the package cannot be found (tensorflow-gpu in particular), force a full reinstall:

>>> pip3 install tensorflow-gpu==1.15.2 --ignore-installed

Reference: https://stackoverflow.com/a/45551934/7151777

Run the verification scripts

Image classification verification script

This script downloads a SqueezeNet model, converts it to an IR model with the Model Optimizer, then classifies car.png with it and prints the top 10 results.

>>> cd /opt/intel/openvino/deployment_tools/demo
>>> ./demo_squeezenet_download_convert_run.sh
target_precision = FP16
[setupvars.sh] OpenVINO environment initialized


###################################################



Downloading the Caffe model and the prototxt
Installing dependencies
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease       
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]                                     
Hit:4 http://dl.google.com/linux/chrome/deb stable Release                                                                           
Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease                                                         
Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
Hit:8 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease        
Hit:9 http://ppa.launchpad.net/peek-developers/stable/ubuntu bionic InRelease 
Hit:10 http://ppa.launchpad.net/transmissionbt/ppa/ubuntu bionic InRelease    
Fetched 177 kB in 2s (109 kB/s)
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
Run sudo -E apt -y install build-essential python3-pip virtualenv cmake libcairo2-dev libpango1.0-dev libglib2.0-dev libgtk2.0-dev libswscale-dev libavcodec-dev libavformat-dev libgstreamer1.0-0 gstreamer1.0-plugins-base

Reading package lists... Done
Building dependency tree       
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
libgtk2.0-dev is already the newest version (2.24.32-1ubuntu1).
virtualenv is already the newest version (15.1.0+ds-1.1).
cmake is already the newest version (3.10.2-1ubuntu2.18.04.1).
gstreamer1.0-plugins-base is already the newest version (1.14.5-0ubuntu1~18.04.1).
libcairo2-dev is already the newest version (1.15.10-2ubuntu0.1).
libglib2.0-dev is already the newest version (2.56.4-0ubuntu0.18.04.4).
libgstreamer1.0-0 is already the newest version (1.14.5-0ubuntu1~18.04.1).
libpango1.0-dev is already the newest version (1.40.14-1ubuntu0.1).
libavcodec-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libavformat-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libswscale-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
python3-pip is already the newest version (9.0.1-2.3~ubuntu1.18.04.1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libpng-dev is already the newest version (1.6.34-1ubuntu0.18.04.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
WARNING: The directory '/home/microfat/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 1)) (3.12)
Requirement already satisfied: requests in /home/microfat/.local/lib/python3.6/site-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.22.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2018.1.18)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.6)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (1.22)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (3.0.4)
Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name squeezenet1.1 --output_dir /home/microfat/openvino_models/models --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt from the cache

========== Retrieving /home/microfat/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel from the cache

################|| Post-processing ||################

========== Replacing text in /home/microfat/openvino_models/models/public/squeezenet1.1/squeezenet1.1.prototxt


Target folder /home/microfat/openvino_models/ir/public/squeezenet1.1/FP16 already exists. Skipping IR generation  with Model Optimizer.If you want to convert a model again, remove the entire /home/microfat/openvino_models/ir/public/squeezenet1.1/FP16 folder. Then run the script again



###################################################

Build Inference Engine samples

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for strtoll
-- Looking for strtoll - found
-- Found InferenceEngine: /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.1") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/microfat/inference_engine_samples_build
Scanning dependencies of target gflags_nothreads_static
Scanning dependencies of target format_reader
[  9%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_completions.cc.o
[ 18%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags_reporting.cc.o
[ 27%] Building CXX object thirdparty/gflags/CMakeFiles/gflags_nothreads_static.dir/src/gflags.cc.o
[ 36%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/bmp.cpp.o
[ 45%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/MnistUbyte.cpp.o
[ 54%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/format_reader.cpp.o
[ 63%] Building CXX object common/format_reader/CMakeFiles/format_reader.dir/opencv_wraper.cpp.o
[ 72%] Linking CXX shared library ../../intel64/Release/lib/libformat_reader.so
[ 72%] Built target format_reader
[ 81%] Linking CXX static library ../../intel64/Release/lib/libgflags_nothreads.a
[ 81%] Built target gflags_nothreads_static
Scanning dependencies of target classification_sample_async
[ 90%] Building CXX object classification_sample_async/CMakeFiles/classification_sample_async.dir/main.cpp.o
[100%] Linking CXX executable ../intel64/Release/classification_sample_async
[100%] Built target classification_sample_async


###################################################

Run Inference Engine classification sample

Run ./classification_sample_async -d CPU -i /opt/intel/openvino/deployment_tools/demo/car.png -m /home/microfat/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml

[ INFO ] InferenceEngine: 
	API version ............ 2.1
	Build .................. 37988
	Description ....... API
[ INFO ] Parsing input parameters
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car.png
[ INFO ] Creating Inference Engine
	CPU
	MKLDNNPlugin version ......... 2.1
	Build ........... 37988

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ INFO ] Start inference (10 asynchronous executions)
[ INFO ] Completed 1 async request execution
[ INFO ] Completed 2 async request execution
[ INFO ] Completed 3 async request execution
[ INFO ] Completed 4 async request execution
[ INFO ] Completed 5 async request execution
[ INFO ] Completed 6 async request execution
[ INFO ] Completed 7 async request execution
[ INFO ] Completed 8 async request execution
[ INFO ] Completed 9 async request execution
[ INFO ] Completed 10 async request execution
[ INFO ] Processing output blobs

Top 10 results:

Image /opt/intel/openvino/deployment_tools/demo/car.png

classid probability label
------- ----------- -----
817     0.6853030   sports car, sport car
479     0.1835197   car wheel
511     0.0917197   convertible
436     0.0200694   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
751     0.0069604   racer, race car, racing car
656     0.0044177   minivan
717     0.0024739   pickup, pickup truck
581     0.0017788   grille, radiator grille
468     0.0013083   cab, hack, taxi, taxicab
661     0.0007443   Model T

[ INFO ] Execution successful

[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool


###################################################

Demo completed successfully.
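The "Top 10 results" listing above is just the classifier's output probabilities sorted in descending order. A sketch of that post-processing, with invented class IDs and probabilities standing in for a real output vector:

```python
def top_n(probs, n=10):
    """Return the n (class_id, probability) pairs with the highest probability."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Toy output vector: class id -> probability (illustrative values only).
probs = {817: 0.6853, 479: 0.1835, 511: 0.0917, 436: 0.0201, 751: 0.0070}

for class_id, p in top_n(probs, n=3):
    print(f"{class_id}\t{p:.7f}")
```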

If you hit connection timeouts during the run (e.g. ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443)")), try switching to a pip mirror or routing through p104.

Inference pipeline verification script

This script downloads three pretrained IR models and chains them: a detection model first locates the vehicles and their license plates; the vehicle crops are fed to a vehicle-attributes model that recognizes features such as vehicle color; the plate crops are then fed to a license-plate recognition model that reads the plate characters.
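The three-stage flow can be sketched with stub functions standing in for the three IR models. Everything below (boxes, attribute values, the plate string) is an invented placeholder to show the data flow, not real model output:

```python
def detect_vehicles_and_plates(frame):
    """Stub for vehicle-license-plate-detection-barrier-0106."""
    return [{"vehicle_box": (10, 10, 200, 120), "plate_box": (60, 100, 140, 118)}]

def vehicle_attributes(vehicle_crop):
    """Stub for vehicle-attributes-recognition-barrier-0039."""
    return {"color": "white", "type": "car"}

def read_plate(plate_crop):
    """Stub for license-plate-recognition-barrier-0001."""
    return "<Beijing>FA8888"

def pipeline(frame):
    """Chain the three stages: detect, then classify and read each hit."""
    results = []
    for det in detect_vehicles_and_plates(frame):
        attrs = vehicle_attributes(det["vehicle_box"])
        plate = read_plate(det["plate_box"])
        results.append({**attrs, "plate": plate})
    return results

print(pipeline(frame=None))
```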

>>> ./demo_security_barrier_camera.sh
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease       
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]          
Hit:4 http://dl.google.com/linux/chrome/deb stable Release                                                
Hit:6 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease                               
Hit:7 http://ppa.launchpad.net/peek-developers/stable/ubuntu bionic InRelease                       
Hit:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease                        
Hit:9 http://ppa.launchpad.net/transmissionbt/ppa/ubuntu bionic InRelease
Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
Fetched 177 kB in 2s (94.5 kB/s)                                
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
Run sudo -E apt -y install build-essential python3-pip virtualenv cmake libcairo2-dev libpango1.0-dev libglib2.0-dev libgtk2.0-dev libswscale-dev libavcodec-dev libavformat-dev libgstreamer1.0-0 gstreamer1.0-plugins-base

Reading package lists... Done
Building dependency tree       
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
libgtk2.0-dev is already the newest version (2.24.32-1ubuntu1).
virtualenv is already the newest version (15.1.0+ds-1.1).
cmake is already the newest version (3.10.2-1ubuntu2.18.04.1).
gstreamer1.0-plugins-base is already the newest version (1.14.5-0ubuntu1~18.04.1).
libcairo2-dev is already the newest version (1.15.10-2ubuntu0.1).
libglib2.0-dev is already the newest version (2.56.4-0ubuntu0.18.04.4).
libgstreamer1.0-0 is already the newest version (1.14.5-0ubuntu1~18.04.1).
libpango1.0-dev is already the newest version (1.40.14-1ubuntu0.1).
libavcodec-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libavformat-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
libswscale-dev is already the newest version (7:3.4.6-0ubuntu0.18.04.1).
python3-pip is already the newest version (9.0.1-2.3~ubuntu1.18.04.1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libpng-dev is already the newest version (1.6.34-1ubuntu0.18.04.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
WARNING: The directory '/home/microfat/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 1)) (3.12)
Requirement already satisfied: requests in /home/microfat/.local/lib/python3.6/site-packages (from -r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (1.22)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->-r /opt/intel/openvino/deployment_tools/demo/../open_model_zoo/tools/downloader/requirements.in (line 2)) (2018.1.18)
[setupvars.sh] OpenVINO environment initialized


###################################################

Downloading Intel models

target_precision = FP16
Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name vehicle-license-plate-detection-barrier-0106 --output_dir /home/microfat/openvino_models/ir --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32-INT8/vehicle-license-plate-detection-barrier-0106.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP32-INT8/vehicle-license-plate-detection-barrier-0106.bin from the cache

################|| Post-processing ||################

Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name license-plate-recognition-barrier-0001 --output_dir /home/microfat/openvino_models/ir --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32/license-plate-recognition-barrier-0001.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32/license-plate-recognition-barrier-0001.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32-INT8/license-plate-recognition-barrier-0001.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP32-INT8/license-plate-recognition-barrier-0001.bin from the cache

################|| Post-processing ||################

Run python3 /opt/intel/openvino_2020.1.023/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name vehicle-attributes-recognition-barrier-0039 --output_dir /home/microfat/openvino_models/ir --cache_dir /home/microfat/openvino_models/cache

################|| Downloading models ||################

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32/vehicle-attributes-recognition-barrier-0039.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32/vehicle-attributes-recognition-barrier-0039.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.bin from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32-INT8/vehicle-attributes-recognition-barrier-0039.xml from the cache

========== Retrieving /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP32-INT8/vehicle-attributes-recognition-barrier-0039.bin from the cache

################|| Post-processing ||################



###################################################

Build Inference Engine demos

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for C++ include stddef.h
-- Looking for C++ include stddef.h - found
-- Check size of uint32_t
-- Check size of uint32_t - done
-- Looking for strtoll
-- Looking for strtoll - found
-- Found OpenCV: /opt/intel/openvino_2020.1.023/opencv (found version "4.2.0") found components:  core imgproc 
-- Found InferenceEngine: /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/lib/intel64/libinference_engine.so (Required is at least version "2.0") 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/microfat/inference_engine_demos_build
[ 40%] Built target gflags_nothreads_static
[ 80%] Built target monitors
[100%] Built target security_barrier_camera_demo


###################################################

Run Inference Engine security_barrier_camera demo

Run ./security_barrier_camera_demo -d CPU -d_va CPU -d_lpr CPU -i /opt/intel/openvino/deployment_tools/demo/car_1.bmp -m /home/microfat/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_lpr /home/microfat/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -m_va /home/microfat/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml

[ INFO ] InferenceEngine: 0x7f4e28e1e040
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car_1.bmp
[ INFO ] Loading device CPU
	CPU
	MKLDNNPlugin version ......... 2.1
	Build ........... 37988

[ INFO ] Loading detection model to the CPU plugin
[ INFO ] Loading Vehicle Attribs model to the CPU plugin
[ INFO ] Loading Licence Plate Recognition (LPR) model to the CPU plugin
[ INFO ] Number of InferRequests: 1 (detection), 3 (classification), 3 (recognition)
[ INFO ] 4 streams for CPU
[ INFO ] Display resolution: 1920x1080
[ INFO ] Number of allocated frames: 3
[ INFO ] Resizable input with support of ROI crop and auto resize is disabled
0.2FPS for (3 / 1) frames
Detection InferRequests usage: 0.0%

[ INFO ] Execution successful


###################################################

Demo completed successfully.

On the first run the script downloads the models. If the downloads are too slow you can accelerate them with p104, but the later build stage will then fail:

>>> p104 ./demo_security_barrier_camera.sh
CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Some of mandatory Inference Engine components are not found.  Please
  consult InferenceEgnineConfig.cmake module's help page.  (missing:
  IE_RELEASE_LIBRARY IE_C_API_RELEASE_LIBRARY IE_NN_BUILDER_RELEASE_LIBRARY)
  (Required is at least version "2.0")
Call Stack (most recent call first):
  /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/share/InferenceEngineConfig.cmake:99 (find_package_handle_standard_args)
  CMakeLists.txt:213 (find_package)


CMake Error at CMakeLists.txt:213 (find_package):
  Found package configuration file:

    /opt/intel/openvino_2020.1.023/deployment_tools/inference_engine/share/InferenceEngineConfig.cmake

  but it set InferenceEngine_FOUND to FALSE so package "InferenceEngine" is
  considered to be NOT FOUND.


-- Configuring incomplete, errors occurred!
See also "/home/microfat/inference_engine_demos_build/CMakeFiles/CMakeOutput.log".
Error on or near line 188; exiting with status 1

So use p104 only to download the models, then rerun the script without it.

Configure the Neural Compute Stick

Configuration

Add the current user to the users group:

>>> sudo usermod -a -G users "$(whoami)"

Log out and back in for the change to take effect.
Install the USB rules:

>>> sudo cp /opt/intel/openvino/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/
>>> sudo udevadm control --reload-rules
>>> sudo udevadm trigger
>>> sudo ldconfig

Reboot for the rules to take effect.

Test

I test with the code from my earlier post on real-time object detection with the Movidius and a Raspberry Pi, available at https://github.com/MacwinWin/raspberry_pi_object_detection.git

# clone the repository
>>> git clone https://github.com/MacwinWin/raspberry_pi_object_detection.git
>>> cd raspberry_pi_object_detection
# check out the latest release branch
>>> git checkout 1.2
>>> python3 openvino_video_object_detection.py --prototxt MobileNetSSD_deploy.prototxt --model MobileNetSSD_deploy.caffemodel --video /home/pi/Git/raspberry_pi_object_detection/airbus.mp4 --device NCS
[INFO] loading model...
[INFO] starting video stream...
Traceback (most recent call last):
  File "openvino_video_object_detection.py", line 55, in <module>
    frame = imutils.resize(frame, width=400)
  File "/home/microfat/.local/lib/python3.6/site-packages/imutils/convenience.py", line 69, in resize
    (h, w) = image.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'

If you see the error above, the interpreter has not switched to the OpenVINO build of OpenCV. Set the environment variables to switch OpenCV temporarily:

>>> source /opt/intel/openvino/bin/setupvars.sh

Run the script again and the detection works as expected.
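Whichever OpenCV build is active, the crash above can be made explicit by checking for empty frames before resizing. A minimal defensive pattern; the fake capture class below merely simulates a stream that runs out of frames:

```python
class FakeCapture:
    """Simulates cv2.VideoCapture: yields the given frames, then (False, None)."""
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        if self._frames:
            return True, self._frames.pop(0)
        return False, None

def process_stream(cap):
    """Consume frames until the stream ends, skipping the None sentinel."""
    processed = 0
    while True:
        ok, frame = cap.read()
        if not ok or frame is None:   # this is what imutils.resize tripped over
            break
        processed += 1                # resize / inference would go here
    return processed

print(process_stream(FakeCapture(["f1", "f2", "f3"])))
```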
