1. Load the gluon SSD model (GluonCV 0.5.0; newer versions use a different SSD structure that openvino_2019.3.334 cannot convert directly, you would have to implement the unsupported layers yourself) and export the *.params and *.json files.
2. Convert them with mo_mxnet.py, the tool that ships with OpenVINO.
3. Run the forward pass (deployment) with OpenVINO.
The code is simple, see below (for the input I use the common shape (1, 3, 512, 512); you can pick your own, but the same shape is needed again further down).
from mxnet import nd
from gluoncv import model_zoo

weight_path = "./hardhat.params"  # path to your own trained weights, if you have any
net = model_zoo.get_model("ssd_512_resnet50_v1_voc", pretrained=True)
# net.load_parameters(weight_path)  # uncomment to load your own weights instead of the VOC ones
net.hybridize()

# run one forward pass so the hybridized graph can be traced and exported
data_shape = (1, 3, 512, 512)
input_data = nd.random.uniform(-1, 1, data_shape)
_ = net(input_data)

# writes TestSSD-symbol.json and TestSSD-0000.params
net.export("TestSSD")
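As an optional sanity check (not part of the original steps), you can reload the exported files back into MXNet and push the same dummy input through them; the file names below match the export call above.
from mxnet import gluon

# reload the exported symbol/params and run the dummy input once
reloaded = gluon.SymbolBlock.imports("TestSSD-symbol.json", ["data"], "TestSSD-0000.params")
_ = reloaded(input_data)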
mo_mxnet.py is a tool that ships with OpenVINO; on my Mac the default install location is /opt/intel/openvino, and the script sits under deployment_tools/model_optimizer.
cd into the directory containing the files generated in step 1 and run:
python /opt/intel/openvino/deployment_tools/model_optimizer/mo_mxnet.py --input_model TestSSD-0000.params \
--input_shape [1,3,512,512] --enable_ssd_gluoncv --data_type FP16
Explanation of the command:
--input_model
Self-explanatory: it reads the *.params and *.json files exported in step 1.
--input_shape
The exported .json file does not record the network input shape, so it has to be given explicitly; I use the common (1, 3, 512, 512). This command failed for me in zsh (zsh treats the square brackets specially); switching to bash worked, and quoting the shape, e.g. "[1,3,512,512]", should also avoid the problem (see the Python sketch after this list for a way to skip the shell entirely).
--enable_ssd_gluoncv
The gluoncv SSD contains the layer '_contrib_box_nms', which OpenVINO does not support, so OpenVINO substitutes a DetectionOutput layer for it; this flag turns that substitution on.
--data_type
The deployment code later needs FP16 precision; the model you are converting does not have to be FP16 itself, mo_mxnet.py converts it for you.
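If you prefer to stay in Python (and sidestep the zsh quoting issue, since no shell is involved), the same conversion can be launched with subprocess. This is only a sketch; the paths are the defaults assumed above and should be adjusted to your install.
import subprocess

# invoke the Model Optimizer directly; passing arguments as a list means
# the [] in --input_shape never reaches a shell
mo = "/opt/intel/openvino/deployment_tools/model_optimizer/mo_mxnet.py"
subprocess.run([
    "python", mo,
    "--input_model", "TestSSD-0000.params",
    "--input_shape", "[1,3,512,512]",
    "--enable_ssd_gluoncv",
    "--data_type", "FP16",
], check=True)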
That completes step 2; the Model Optimizer produces the IR files (a .xml and a .bin).
The forward pass (step 3) is done by the script below:
"""01. Predict with pre-trained SSD models
==========================================
This article shows how to play with pre-trained SSD models with only a few
lines of code.
First let's import some necessary libraries:
"""
from gluoncv import model_zoo, data, utils
from matplotlib import pyplot as plt
from openvino.inference_engine import ie_api as ie
import numpy as np
######################################################################
# Load a pretrained model
# -------------------------
#
# Let's get an SSD model trained with 512x512 images on Pascal VOC
# dataset with ResNet-50 V1 as the base model. By specifying
# ``pretrained=True``, it will automatically download the model from the model
# zoo if necessary. For more pretrained models, please refer to
# :doc:`../../model_zoo/index`.
gluoncv_net = model_zoo.get_model('ssd_512_resnet50_v1_voc', pretrained=True)
core = ie.IECore()
# point these at the .xml/.bin files actually produced in step 2 (your file names may differ)
net = ie.IENetwork('ssd-fp32-0000.xml', 'ssd-fp32-0000.bin')
######################################################################
# Pre-process an image
# --------------------
#
# Next we download an image, and pre-process with preset data transforms. Here we
# specify that we resize the short edge of the image to 512 px. But you can
# feed an arbitrarily sized image.
#
# You can provide a list of image file names, such as ``[im_fname1, im_fname2,
# ...]`` to :py:func:`gluoncv.data.transforms.presets.ssd.load_test` if you
# want to load multiple image together.
#
# This function returns two results. The first is a NDArray with shape
# `(batch_size, RGB_channels, height, width)`. It can be fed into the
# model directly. The second one contains the images in numpy format to
# easy to be plotted. Since we only loaded a single image, the first dimension
# of `x` is 1.
im_fname = utils.download('https://github.com/dmlc/web-data/blob/master/' +
'gluoncv/detection/street_small.jpg?raw=true',
path='street_small.jpg')
x, img = data.transforms.presets.ssd.load_test(im_fname, short=512)
print('Shape of pre-processed image:', x.shape)
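# note: the IR was converted with a fixed (1, 3, 512, 512) input, so make sure the
# pre-processed tensor really has that shape (resize/crop to 512x512 if it does not)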
######################################################################
# Inference and display
# ---------------------
#
# The forward function will return all detected bounding boxes, and the
# corresponding predicted class IDs and confidence scores. Their shapes are
# `(batch_size, num_bboxes, 1)`, `(batch_size, num_bboxes, 1)`, and
# `(batch_size, num_bboxes, 4)`, respectively.
#
# We can use :py:func:`gluoncv.utils.viz.plot_bbox` to visualize the
# results. We slice the results for the first image and feed them into `plot_bbox`:
exec_net = core.load_network(net, device_name="MYRIAD", num_requests=1)
input_names = net.inputs.keys()
input_data = list(x.asnumpy())
assert(len(input_data) == len(input_names))
input = dict(zip(input_names, input_data))
exec_net.requests[0].infer(input)
res = exec_net.requests[0].outputs
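# if the key below raises a KeyError, list the available output names first:
# print(list(res.keys()))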
box1 = res['1208/DetectionOutput_'] # the '1208' prefix may differ for your model; print res to see which key to use
"""
bbox: box1[0,0,:,3:7]
scores: box1[0,0,:,2]
label: box1[0,0,:,1]
"""
ax = utils.viz.plot_bbox(img, box1[0,0,:,3:7], box1[0,0,:,2],
box1[0,0,:,1], class_names=gluoncv_net.classes)
plt.show()
The script loads the network with device_name="MYRIAD" (an Intel Neural Compute Stick); you can use CPU instead, but it seems you then have to install some additional OpenCL-related components.
Each row of the DetectionOutput result contains the following values (the number is the position in the row): 0 -> index of the image the detection belongs to (only needed when several images share one batch; this post uses a single image), 1 -> label index, 2 -> confidence score, 3~6 -> the top-left and bottom-right corners in the original image (xmin, ymin, xmax, ymax).
If you do not want to depend on gluoncv's preprocessing function data.transforms.presets.ssd.load_test and the box-drawing function utils.viz.plot_bbox, you can write your own; rough sketches of both follow.
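First, a minimal sketch of a DIY replacement for load_test, assuming the fixed (1, 3, 512, 512) input the IR was converted with; the file name and the ImageNet mean/std are the usual GluonCV defaults, not something taken from the original post.
import mxnet as mx

# read the image, force it to 512x512 (the IR input is fixed), and normalize
img_nd = mx.image.imread('street_small.jpg')                 # HWC, RGB, uint8
img_nd = mx.image.imresize(img_nd, 512, 512)                 # exactly 512x512
orig = img_nd.asnumpy()                                       # keep a copy for plotting
x = mx.nd.image.to_tensor(img_nd)                             # CHW, float32 in [0, 1]
x = mx.nd.image.normalize(x, mean=(0.485, 0.456, 0.406),
                          std=(0.229, 0.224, 0.225))
x = x.expand_dims(0)                                          # (1, 3, 512, 512)

And a rough, hypothetical stand-in for utils.viz.plot_bbox built directly on matplotlib; it assumes the [image_id, label, score, xmin, ymin, xmax, ymax] row layout described above and a 0.5 confidence threshold.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def plot_detections(img, dets, classes, thresh=0.5):
    """dets: rows of [image_id, label, score, xmin, ymin, xmax, ymax]."""
    fig, ax = plt.subplots(1)
    ax.imshow(img)
    for image_id, label, score, xmin, ymin, xmax, ymax in dets:
        if image_id < 0 or score < thresh:
            continue  # skip padded rows and weak detections
        ax.add_patch(patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                       fill=False, edgecolor='red', linewidth=2))
        ax.text(xmin, ymin - 2, '%s %.2f' % (classes[int(label)], score), color='red')
    plt.show()

# usage: plot_detections(img, box1[0, 0], gluoncv_net.classes)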
To run the script, switch to root first (sudo su) and then run python3 test.py, or write a small shell script and run it with sudo from a normal user, for example:
#!/bin/bash
source /home/rainweic/intel/openvino/bin/setupvars.sh  # change this to your own OpenVINO install path
python3 test.py  # run your code