OpenVINO Model Deployment and Inference

I. Model Conversion Workflow:

(using the conversion of an MXNet model to OpenVINO as an example)

1. Install the OpenVINO toolkit (using a Docker image as an example)


sudo docker search openvino


docker pull cortexica/openvino:latest
docker run -i -t cortexica/openvino:latest /bin/bash
 
cd /opt/intel/openvino/deployment_tools/model_optimizer
pip3 install -r requirements_mxnet.txt

2. Place the MXNet network structure file and the parameter file in the same directory

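A minimal layout might look like the following (file names are illustrative; mo_mxnet.py looks for a -symbol.json file whose prefix matches the .params file):

$ ls
model-0000.params  model-symbol.json
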
3. Convert the model

python3 mo_mxnet.py --input_shape [1,3,112,112] --input_model model-0000.params --output_dir openvino_models

cd openvino_models/

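If the conversion succeeds, the output directory should contain the generated IR files (names follow the prefix of the input model; a .mapping file is typically produced as well):

$ ls
model-0000.bin  model-0000.mapping  model-0000.xml
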
II. OpenVINO Inference Workflow (Ubuntu 16.04 with OpenVINO 2020.3 as an example):

1. Set the environment variables

vi ~/.bashrc
# add the following line to ~/.bashrc, then reload the shell configuration
source <INSTALL_DIR>/bin/setupvars.sh
source ~/.bashrc
# to use a specific Python version, point PYTHONPATH at the matching API directory
export PYTHONPATH=<INSTALL_DIR>/deployment_tools/inference_engine/python_api/<desired_python_version>
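
To confirm that the environment is set up correctly, a quick check (assuming the Python bindings were installed for the interpreter in use) is:

python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"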

2. Import the Inference Engine, OpenCV, NumPy, and time modules

from openvino.inference_engine import IECore, IENetwork
import cv2
import numpy as np
from time import time

3. Configure the inference device, the IR file paths, and the image (or video) path

DEVICE = 'CPU'
model_xml = 'model-0000.xml'
model_bin = 'model-0000.bin'
image_file = 'test.jpg'
# CPU extension library (Windows-style example path); on Ubuntu it would be a .so file.
# With OpenVINO 2020.x the common CPU extensions are built into the CPU plugin, so this is usually unnecessary.
# cpu_extension_lib = "/inference_engine/bin/intel64/Release/cpu_extension.dll"
labels_map = ["fake", "cat", "dog"]

4. Initialize the plugin


ie = IECore()
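
The IECore instance can also be used to inspect the runtime before loading a model; for example (output depends on the machine):

print(ie.available_devices)        # e.g. ['CPU']
print(ie.get_versions(DEVICE))     # plugin version info for the chosen device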

5. Read the IR model files

net = IENetwork(model=model_xml, weights=model_bin)
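
Note: starting with OpenVINO 2020.x the IENetwork constructor is deprecated; net = ie.read_network(model=model_xml, weights=model_bin) is the documented replacement and the rest of the code stays the same.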

6. Prepare the input and output blobs

print("Preparing input blobs")
input_blob = next(iter(net.inputs))
out_blob = next(iter
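
To sanity-check the converted model, the blob shapes can be printed; for the conversion command above the input shape should be [1, 3, 112, 112]:

print("Input blob: {}, shape: {}".format(input_blob, net.inputs[input_blob].shape))
print("Output blob: {}, shape: {}".format(out_blob, net.outputs[out_blob].shape))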

7. Load the model onto the inference device

print("Loading IR to the plugin...")
exec_net = ie.load_network(network=net, num_requests=1, device_name=DEVICE)

8. Read the image and run inference

n, c, h, w = net.inputs[input_blob].shape
frame = cv2.imread(image_file)
initial_h, initial_w, channels = frame.shape
# resize the image to the input size expected by the model
image = cv2.resize(frame, (w, h))
# change the data layout from HWC to CHW as required by the model
image = image.transpose((2, 0, 1))
# add the batch dimension: (c, h, w) -> (n, c, h, w)
image = image.reshape((n, c, h, w))
print("Batch size is {}".format(n))
print("Starting inference in synchronous mode")
start = time()
res = exec_net.infer(inputs={input_blob: image})
end = time()
print("Infer Time:{}ms".format((end - start) * 1000))

9. Process the output and display the results

print("Processing output blob")
res = res[out_blob]
print(res)
for obj in res:
    print(obj)
    # 当信心指数大于0.7时,显示检测结果
    if obj[2] > 0.7:
        xmin = int(obj[3] * initial_w)
        ymin = int(obj[4] * initial_h)
        xmax = int(obj[5] * initial_w)
        ymax = int(obj[6] * initial_h)
        class_id = int(obj[1])
        # 显示信心指数,物体标签和边界框
        color = (0, 255, 0) if class_id > 1 else (255, 0, 0)
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
        det_label = labels_map[class_id] if labels_map else str(class_id)
        cv2.putText(frame, det_label + ' ' + str(round(obj[2] * 100, 1)) + ' %', (xmin, ymin - 7),
                    cv2.FONT_HERSHEY_COMPLEX, 0.6, color, 1)
print("Inference is completed")
cv2.imshow("Detection results", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
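
On a headless server without a display, cv2.imshow will fail; writing the annotated frame to disk is a simple alternative (the output file name is arbitrary):

cv2.imwrite("result.jpg", frame)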
