The model below is based on ssd_mobilenet_v2 and is deployed with jetson-inference.
First, retrain the model with the TensorFlow Object Detection API.
Link: https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10#2-set-up-tensorflow-directory-and-anaconda-virtual-environment
Then deploy the trained model in the jetson-inference project (link below). The key step is converting the model's frozen .pb file into a .uff file supported by TensorRT.
https://github.com/dusty-nv/jetson-inference
Model conversion reference: https://github.com/AastaNV/TRT_object_detection
That repository is NVIDIA's open-source model conversion project; follow its steps, and running main.py produces a temp.uff file.
To obtain a temp.uff file from which jetson-inference can successfully build a model inference engine, the key is to correctly configure the parameters and related definitions in TRT_object_detection/config/model_ssd_mobilenet_v2_coco_2018_03_29.py, as described in the following three points:
1. Set the parameter featureMapShapes=[19, 10, 5, 3, 2, 1]
These feature map sizes are determined by the input image size in the training config ssd_mobilenet_v2_coco.config; take the following 300*300 setting as an example:
image_resizer {
  fixed_shape_resizer {
    height: 300
    width: 300
  }
}
If the image_resizer parameters change, the feature map sizes can be obtained with the script below; update featureMapShapes accordingly (see the usage example after the script).
# Get the feature map sizes for a given input resolution
import sys
import tensorflow as tf
from object_detection.anchor_generators.multiple_grid_anchor_generator import create_ssd_anchors
from object_detection.models.ssd_mobilenet_v2_feature_extractor_test import SsdMobilenetV2FeatureExtractorTest

feature_extractor = SsdMobilenetV2FeatureExtractorTest()._create_feature_extractor(
    depth_multiplier=1, pad_to_multiple=1)
image_batch_tensor = tf.zeros([1, int(sys.argv[1]), int(sys.argv[2]), 1])
print([tuple(feature_map.get_shape().as_list()[1:3])
       for feature_map in feature_extractor.extract_features(image_batch_tensor)])
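If the script above is saved as, say, get_feature_map_shapes.py (the filename here is only an example), it takes the input height and width as command-line arguments; for the 300*300 case it should print shapes consistent with the featureMapShapes used above:
# Example invocation (script name assumed):
#   python get_feature_map_shapes.py 300 300
# Expected output for a 300x300 input:
#   [(19, 19), (10, 10), (5, 5), (3, 3), (2, 2), (1, 1)]
# The first element of each tuple gives featureMapShapes=[19, 10, 5, 3, 2, 1].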
2. The numClasses parameter in model_ssd_mobilenet_v2_coco_2018_03_29.py
If the dataset has n object classes, set numClasses = n + 1.
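For example, assuming a hypothetical dataset with 3 object classes, the extra class is the background (backgroundLabelId=0 in the NMS_TRT plugin definition below):
# Hypothetical example: a dataset with 3 object classes
# numClasses = number of object classes + 1 background class
#            = 3 + 1 = 4, which is the value used in the complete file below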
3. model_ssd_mobilenet_v2_coco_2018_03_29.py lacks an input definition for the GridAnchor node
Solution:
Define a constant input tensor and set it as the input of the GridAnchor node:
# Create a constant tensor and set it as the input for GridAnchor_TRT
data = np.array([1, 1], dtype=np.float32)
anchor_input = gs.create_node("AnchorInput", "Const", value=data)
graph.append(anchor_input)
graph.find_nodes_by_op("GridAnchor_TRT")[0].input.insert(0, "AnchorInput")
return graph
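(Note: as far as I understand this fix, the values in the constant are not used by the plugin itself; the dummy tensor only gives the GridAnchor_TRT node an input so that the UFF conversion and engine build can succeed.)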
The complete modified model_ssd_mobilenet_v2_coco_2018_03_29.py is as follows:
# Minds.ai , 2019
# SSD Mobilenet V2 configuration file
import graphsurgeon as gs
import numpy as np
import tensorflow as tf
path = 'model/ssd_mobilenet_v2_coco_2018_03_29_tuned/frozen_inference_graph.pb'  # frozen graph exported from the Object Detection API
TRTbin = 'TRT_ssd_mobilenet_v2_coco_2018_03_29_tuned.bin'  # filename used by main.py for the serialized TensorRT engine
output_name = ['NMS']  # output node of the converted graph
dims = [3, 300, 300]   # network input dimensions (CHW)
layout = 7             # number of values per detection in the NMS output
def add_plugin(graph):
    # Remove Assert nodes and forward Identity nodes so they do not end up in the UFF graph
    all_assert_nodes = graph.find_nodes_by_op("Assert")
    graph.remove(all_assert_nodes, remove_exclusive_dependencies=True)

    all_identity_nodes = graph.find_nodes_by_op("Identity")
    graph.forward_inputs(all_identity_nodes)

    # Network input placeholder (NCHW, 3x300x300)
    Input = gs.create_plugin_node(
        name="Input",
        op="Placeholder",
        shape=[1, 3, 300, 300]
    )

    # GridAnchor_TRT plugin, replacing the MultipleGridAnchorGenerator namespace
    PriorBox = gs.create_plugin_node(
        name="GridAnchor",
        op="GridAnchor_TRT",
        minSize=0.2,
        maxSize=0.95,
        aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
        variance=[0.1, 0.1, 0.2, 0.2],
        featureMapShapes=[19, 10, 5, 3, 2, 1],
        numLayers=6
    )

    # NMS_TRT plugin, replacing the TensorFlow Postprocessor namespace
    NMS = gs.create_plugin_node(
        name="NMS",
        op="NMS_TRT",
        shareLocation=1,
        varianceEncodedInTarget=0,
        backgroundLabelId=0,
        confidenceThreshold=1e-8,
        nmsThreshold=0.6,
        topK=100,
        keepTopK=100,
        numClasses=4,
        inputOrder=[0, 2, 1],
        confSigmoid=1,
        isNormalized=1
    )

    concat_priorbox = gs.create_node(
        "concat_priorbox",
        op="ConcatV2",
        axis=2
    )

    concat_box_loc = gs.create_plugin_node(
        "concat_box_loc",
        op="FlattenConcat_TRT",
        dtype=tf.float32,
        axis=1,
        ignoreBatch=0
    )

    concat_box_conf = gs.create_plugin_node(
        "concat_box_conf",
        op="FlattenConcat_TRT",
        dtype=tf.float32,
        axis=1,
        ignoreBatch=0
    )

    # Map TensorFlow namespaces/nodes to the plugin nodes defined above
    namespace_plugin_map = {
        "MultipleGridAnchorGenerator": PriorBox,
        "Postprocessor": NMS,
        "Preprocessor": Input,
        "ToFloat": Input,
        "Cast": Input,
        "image_tensor": Input,
        "Concatenate": concat_priorbox,
        "concat": concat_box_loc,
        "concat_1": concat_box_conf
    }

    graph.collapse_namespaces(namespace_plugin_map)
    graph.remove(graph.graph_outputs, remove_exclusive_dependencies=False)
    # Drop the spurious 'Input' connection left on the NMS node after collapsing namespaces
    graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")

    # Create a constant tensor and set it as the input for GridAnchor_TRT
    data = np.array([1, 1], dtype=np.float32)
    anchor_input = gs.create_node("AnchorInput", "Const", value=data)
    graph.append(anchor_input)
    graph.find_nodes_by_op("GridAnchor_TRT")[0].input.insert(0, "AnchorInput")

    return graph
Next, copy the temp.uff generated by running main.py into the corresponding jetson-inference directory, where it can be correctly built into a model inference engine.
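For reference, here is a minimal sketch of the conversion step that main.py performs with this config file (the model.* names come from the config above; the import path and the exact uff call are assumptions based on the TRT_object_detection repository and may differ between versions):
import uff
import graphsurgeon as gs
# Import the config file above; adjust the import path to where it actually lives
import model_ssd_mobilenet_v2_coco_2018_03_29 as model

# Load the frozen TensorFlow graph and apply the plugin mapping defined in add_plugin()
dynamic_graph = model.add_plugin(gs.DynamicGraph(model.path))

# Convert the modified graph to UFF, with 'NMS' (model.output_name) as the output node
uff.from_tensorflow(dynamic_graph.as_graph_def(), model.output_name, output_filename='temp.uff')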
The modifications to model_ssd_mobilenet_v2_coco_2018_03_29.py above are based on: https://www.minds.ai/post/deploying-ssd-mobilenet-v2-on-the-nvidia-jetson-and-nano-platforms