Configuring and Using Intel OpenVINO

Background

    In a recent project, a customer asked whether CNN inference could run on their existing CPU servers, i.e. without adding NVIDIA GPUs, in order to cut cost. This prompted an investigation of Intel's OpenVINO.

    OpenVINO is Intel's development toolkit for convolutional-neural-network-based computer vision. Its goal is to make it fast to develop and deploy computer vision applications on Intel hardware. OpenVINO supports a range of Intel targets, including CPUs, integrated graphics, the Intel Movidius Neural Compute Stick, and FPGAs.

    The officially stated hardware and operating system requirements are:

Processors

  • 6th-8th Generation Intel® Core™
  • Intel® Xeon® v5 family
  • Intel® Xeon® v6 family
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
  • Intel Movidius NCS

Operating Systems

  • Ubuntu* 16.04 long-term support (LTS), 64-bit
  • CentOS* 7.4 or higher, 64-bit
  • Yocto Project* Poky Jethro* v2.0.3, 64-bit (for target only)

     These requirements are fairly strict. The graphics acceleration path relies on Intel (Iris) integrated GPUs, and among the server-side Xeon E-series parts essentially only E3 v5 processors with integrated graphics qualify. My machine has a 4th-generation i7, but for pure CPU execution the graphics requirement can be ignored; older CPUs also work in testing, just with noticeably lower performance.

Deployment

    Test environment: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz, running Ubuntu 18.04.

    Official installation guide: https://software.intel.com/en-us/articles/OpenVINO-Install-Linux

    Ubuntu 18.04 support turned out to be poor: during installation, several dependency packages had already been removed from the 18.04 repositories, so the dependency check failed. I therefore decided to run a 16.04 LTS system inside Docker. While searching Docker Hub I found, to my delight, that someone had already uploaded an image with the Intel OpenVINO environment preconfigured:

lyh@lyh-All-Series:~$ sudo docker search openvino
[sudo] password for lyh: 
NAME                              DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
cortexica/openvino                Ubuntu 16.04 image with Intel OpenVINO depen…   1                                       

    Pulling the image requires about 2.2 GB of disk space. Run the container:

sudo docker run -it cortexica/openvino /bin/bash

    The OpenVINO SDK is located at:

    /opt/intel/

    Its environment variables can be initialized by sourcing /opt/intel/computer_vision_sdk/bin/setupvars.sh.

Verifying the Environment

    Run the official demo program:

root@0bf034642447:/opt/intel/computer_vision_sdk/deployment_tools/demo# ./demo_squeezenet_download_convert_run.sh

    The script automatically downloads and installs the required dependencies, fetches a test Caffe model, and converts it into OpenVINO's xml/bin IR format. The test image is a photo of a car; the classification output is as follows:

[ INFO ] InferenceEngine: 
	API version ............ 1.2
	Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin

	API version ............ 1.2
	Build .................. lnx_20180510
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
	/root/openvino_models/ir/squeezenet1.1/squeezenet1.1.xml
	/root/openvino_models/ir/squeezenet1.1/squeezenet1.1.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Processing output blobs

Top 10 results:

Image /opt/intel/computer_vision_sdk/deployment_tools/demo/../demo/car.png

817 0.8363345 label sports car, sport car
511 0.0946488 label convertible
479 0.0419131 label car wheel
751 0.0091071 label racer, race car, racing car
436 0.0068161 label beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0037564 label minivan
586 0.0025741 label half track
717 0.0016069 label pickup, pickup truck
864 0.0012027 label tow truck, tow car, wrecker
581 0.0005882 label grille, radiator grille


total inference time: 3.4724500
Average running time of one iteration: 3.4724500 ms

Throughput: 287.9811121 FPS

[ INFO ] Execution successful

    The demo program resizes the input image to 227x227 and classifies it.
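What the demo does around the network can be sketched in plain NumPy: resize the image to the 227x227 network input (nearest-neighbor here to stay self-contained; the real sample uses OpenCV), reorder HWC to NCHW with a batch dimension, and take the top-10 classes from softmaxed scores. The function names and the random scores below are illustrative, not the sample's actual code.

```python
import numpy as np

def preprocess(image, size=(227, 227)):
    """Resize an HWC uint8 image with nearest-neighbor sampling and
    return a 1xCxHxW float blob, the layout the demo network expects."""
    h, w, _ = image.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = image[rows][:, cols]                   # (227, 227, 3)
    chw = resized.transpose(2, 0, 1).astype(np.float32)
    return chw[np.newaxis]                           # (1, 3, 227, 227)

def top_k(scores, k=10):
    """Softmax the raw scores and return (class_id, prob) pairs,
    highest probability first -- the demo's 'Top 10 results' table."""
    e = np.exp(scores - scores.max())
    probs = e / e.sum()
    order = np.argsort(probs)[::-1][:k]
    return [(int(i), float(probs[i])) for i in order]

# Toy example: a blank 259x787 image (the demo's car.png size) and
# random scores standing in for a 1000-class network output.
image = np.zeros((259, 787, 3), dtype=np.uint8)
blob = preprocess(image)
print(blob.shape)            # (1, 3, 227, 227)

scores = np.random.rand(1000)
for cls, p in top_k(scores):
    print(cls, round(p, 7))
```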

Using a TensorFlow Model

    In our actual project the models are trained with TensorFlow, so they have to be converted into OpenVINO's xml/bin IR format.
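For context, the IR produced by the Model Optimizer is a pair of files: an .xml describing the network topology and a .bin holding the raw weights. A minimal sketch of inspecting such a topology with Python's standard XML parser (the snippet is a hand-written toy in the general shape of an IR file, not real Model Optimizer output):

```python
import xml.etree.ElementTree as ET

# Toy IR topology; layer names and attributes are illustrative only.
toy_ir = """
<net name="toy" version="2">
  <layers>
    <layer id="0" name="input" type="Input"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

root = ET.fromstring(toy_ir)
layers = [(l.get("name"), l.get("type")) for l in root.iter("layer")]
print(layers)   # [('input', 'Input'), ('conv1', 'Convolution'), ('prob', 'SoftMax')]
```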

Set up the TensorFlow conversion environment:

root@0bf034642447:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites# sudo ./install_prerequisites_tf.sh

Convert the model:

./mo_tf.py --input_meta_graph /home/Project/Iv_v_149/pb_model/model.ckpt-3800000.meta  --output_dir model/

Run the sample:

./classification_sample  -d CPU  -i car.png -m /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/model/model.ckpt-3800000.xml

Result:

[ INFO ] InferenceEngine: 
	API version ............ 1.2
	Build .................. 13911
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin

	API version ............ 1.2
	Build .................. lnx_20180510
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
	/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/model/model.ckpt-3800000.xml
	/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/model/model.ckpt-3800000.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (1920, 1080)
[ INFO ] Input size C: 1 W: 1920 H: 1080
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Output size: 4
[ INFO ] Output W: 1920 H: 1080
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference (1 iterations)
[ INFO ] Infer total: 482.018
[ INFO ] Processing output blobs

total inference time: 482.0178449
Average running time of one iteration: 482.0178449 ms

Throughput: 2.0746120 FPS

    Judging from this experiment, pure CPU inference for image enhancement is still rather slow; under the same conditions a GTX 1070 reaches more than 20 fps. The next step is to find a machine on which to test OpenVINO on an Iris GPU.
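The throughput figures in the logs are simply the reciprocal of the per-iteration latency. A quick check in Python, using the numbers from the two runs above, reproduces the reported FPS and shows the roughly 140x gap between the 227x227 SqueezeNet demo and the 1080p enhancement model:

```python
def throughput_fps(latency_ms):
    """Convert average per-iteration latency (ms) to frames per second."""
    return 1000.0 / latency_ms

# Latencies reported by the two runs above.
squeezenet_fps = throughput_fps(3.4724500)    # ~287.98 FPS
enhance_fps = throughput_fps(482.0178449)     # ~2.07 FPS
print(round(squeezenet_fps, 2), round(enhance_fps, 2))
```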
