Because Jetson is ARM-based, model deployment on it differs slightly from deployment on traditional x86 hosts and servers, but the overall flow is similar and breaks down into three steps: export the model to ONNX, build a TensorRT inference engine, and serve it with Triton.
First, convert the model trained in PyTorch (or another framework) to ONNX format for the later deployment steps. PyTorch's torch.onnx module can export a model directly to ONNX. Taking YOLOv6 as an example, the usage is as follows:
# Snippet from YOLOv6's export script; model, img, f, args and
# dynamic_axes are defined earlier in that script.
torch.onnx.export(model, img, f, verbose=False, opset_version=13,
                  training=torch.onnx.TrainingMode.EVAL,
                  do_constant_folding=True,
                  input_names=['images'],
                  output_names=['num_dets', 'det_boxes', 'det_scores', 'det_classes']
                               if args.end2end else ['outputs'],
                  dynamic_axes=dynamic_axes)
The export can be run with the following command:
python export_onnx.py --weights ./outputs/yolov6.pth
For more options, refer to the corresponding parameters of the YOLOv6 export script.
Once this completes, you have the model file in ONNX format.
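Before building the engine, it can be worth sanity-checking the exported file. The sketch below is illustrative only, assuming the onnx and onnxruntime packages are installed on the development machine and that the export wrote yolov6.onnx:

# Sanity-check the exported ONNX file (assumed filename: yolov6.onnx).
import onnx
import onnxruntime as ort

onnx_model = onnx.load('yolov6.onnx')
onnx.checker.check_model(onnx_model)  # raises if the graph is malformed

# Run the graph once on CPU to confirm it loads end to end.
session = ort.InferenceSession('yolov6.onnx', providers=['CPUExecutionProvider'])
print([i.name for i in session.get_inputs()])   # expect ['images']
print([o.name for o in session.get_outputs()])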
After installing the ARM build of TensorRT on the Jetson (it ships as part of JetPack; see NVIDIA's TensorRT installation documentation for details), trtexec can be used to build an inference engine from the ONNX model file. Once installed there is a tensorrt folder under /usr/src/, and the following command builds the engine:
/usr/src/tensorrt/bin/trtexec --onnx=yolov6.onnx --fp16 --workspace=4096 --saveEngine=yolov6-fp16.engine
When the run finishes, the inference engine is saved to disk and a performance summary similar to the following is printed:
[11/12/2022-17:04:55] [I] === Performance summary ===
[11/12/2022-17:04:55] [I] Throughput: 47.1886 qps
[11/12/2022-17:04:55] [I] Latency: min = 21.2761 ms, max = 21.8617 ms, mean = 21.4982 ms, median = 21.4636 ms, percentile(99%) = 21.7831 ms
[11/12/2022-17:04:55] [I] Enqueue Time: min = 1.04932 ms, max = 2.34424 ms, mean = 1.34331 ms, median = 1.27527 ms, percentile(99%) = 2.00671 ms
[11/12/2022-17:04:55] [I] H2D Latency: min = 0.408203 ms, max = 0.515991 ms, mean = 0.436084 ms, median = 0.427795 ms, percentile(99%) = 0.515137 ms
[11/12/2022-17:04:55] [I] GPU Compute Time: min = 20.8496 ms, max = 21.4231 ms, mean = 21.045 ms, median = 21.0016 ms, percentile(99%) = 21.3416 ms
[11/12/2022-17:04:55] [I] D2H Latency: min = 0.0112305 ms, max = 0.019043 ms, mean = 0.0170788 ms, median = 0.0170898 ms, percentile(99%) = 0.0185547 ms
[11/12/2022-17:04:55] [I] Total Host Walltime: 3.05159 s
[11/12/2022-17:04:55] [I] Total GPU Compute Time: 3.03048 s
[11/12/2022-17:04:55] [I] Explanations of the performance metrics are printed in the verbose logs.
[11/12/2022-17:04:55] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --onnx=yolov6.onnx --fp16 --workspace=4096 --saveEngine=yolov6-fp16.engine
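Before handing the engine to Triton, a quick check that it deserializes on the Jetson can save debugging time later. A minimal sketch using the TensorRT Python bindings that ship with JetPack (the binding-level API below matches the TensorRT 8.4 version shown in the log):

# Verify the serialized engine loads and list its I/O bindings.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open('yolov6-fp16.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

assert engine is not None
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))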
With the inference engine from the previous step, Triton can be used for the final deployment step. For installing the Jetson build of Triton Server, refer to Triton Inference Server Support for Jetson and JetPack.
Once installed, deployment only requires registering and configuring the model; for more information see the Triton Model Configuration Documentation. Note that the model in this deployment is named pe, so if you are serving the engine built above, substitute yolov6-fp16.engine and a matching model name. The simple setup used here is as follows:
# Create folder structure
$ mkdir -p triton-deploy/models/pe/1/
$ touch triton-deploy/models/pe/config.pbtxt
# Place model
$ mv pe-fp16.engine triton-deploy/models/pe/1/model.plan
After running these commands, the model repository has the following layout:
$ tree triton-deploy/
triton-deploy/
└── models
└── pe
├── 1
│ └── model.plan
└── config.pbtxt
3 directories, 2 files
With the directory and configuration file laid out as above, the inference engine from the previous step is placed in the version directory as model.plan, and config.pbtxt still needs to be written. A simple example config.pbtxt can be as small as this:
name: "pe"
platform: "tensorrt_plan"
max_batch_size: 1
dynamic_batching { }
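Because strict_model_config is left at 0 (visible in the server log below), Triton auto-completes the input and output sections from the TensorRT plan, so the four lines above suffice. They can also be declared explicitly; the sketch below assumes the end2end YOLOv6 outputs named in the export step and a 3x640x640 FP32 input, all of which are assumptions to verify against your own engine:

name: "pe"
platform: "tensorrt_plan"
max_batch_size: 1
dynamic_batching { }
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]  # assumed input resolution
  }
]
output [
  { name: "num_dets",    data_type: TYPE_INT32, dims: [ 1 ] },
  { name: "det_boxes",   data_type: TYPE_FP32,  dims: [ 100, 4 ] },
  { name: "det_scores",  data_type: TYPE_FP32,  dims: [ 100 ] },
  { name: "det_classes", data_type: TYPE_INT32, dims: [ 100 ] }
]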
With the configuration in place, the model can be served with tritonserver:
./tritonserver2.27.0-jetpack5.0.2/bin/tritonserver --model-repository=triton-deploy/models --backend-directory=/home/nvidia/Downloads/tritonserver2.27.0-jetpack5.0.2/backends --backend-config=tensorrt,version=8
On a successful start the output looks like the log below, with the model reported as READY; inference calls can then be made against the gRPC endpoint on port 8001, e.g. with the Python tritonclient package (a client sketch follows the log).
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
| pe | 1 | READY |
+-------+---------+--------+
W1121 10:34:52.037176 47570 metrics.cc:354] No polling metrics (CPU, GPU, Cache) are enabled. Will not poll for them.
I1121 10:34:52.037548 47570 tritonserver.cc:2264]
+----------------------------------+------------------------------------------------------------------+
| Option | Value |
+----------------------------------+------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.27.0 |
| server_extensions | classification sequence model_repository model_repository(unload |
| | _dependents) schedule_policy model_configuration system_shared_m |
| | emory cuda_shared_memory binary_tensor_data statistics trace log |
| | ging |
| model_repository_path[0] | triton-deploy/models |
| model_control_mode | MODE_NONE |
| strict_model_config | 0 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| response_cache_byte_size | 0 |
| min_supported_compute_capability | 5.3 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+------------------------------------------------------------------+
I1121 10:34:52.044596 47570 grpc_server.cc:4819] Started GRPCInferenceService at 0.0.0.0:8001
I1121 10:34:52.045703 47570 http_server.cc:3474] Started HTTPService at 0.0.0.0:8000
I1121 10:34:52.088336 47570 http_server.cc:181] Started Metrics Service at 0.0.0.0:8002
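A minimal client sketch for the gRPC endpoint above, assuming the tritonclient package is installed (pip install tritonclient[grpc]) and carrying over the input name and 640x640 FP32 shape from the export step as assumptions:

# Send one dummy inference request to the model served above.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url='localhost:8001')

img = np.zeros((1, 3, 640, 640), dtype=np.float32)  # dummy input batch
inp = grpcclient.InferInput('images', list(img.shape), 'FP32')
inp.set_data_from_numpy(img)

result = client.infer(model_name='pe', inputs=[inp])
print(result.as_numpy('det_boxes').shape)  # output name assumed from the export step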
With that, the deployment of the model on the Jetson NX is complete.