mmdetection is an open-source deep learning object detection toolbox based on PyTorch, developed by SenseTime (winner of the 2018 COCO object detection challenge) and the Chinese University of Hong Kong. It is powerful, computationally efficient, configuration-driven, and fairly easy to train and test with. However, PyTorch models are not easy to deploy, and there is still room to improve inference speed. A currently effective approach is to convert the model into a behaviorally equivalent TensorRT model; this post documents the conversion workflow.
There are several ways to convert an mmdetection PyTorch model to a TensorRT model. This post uses the mmdetection-to-tensorrt library as the core tool to perform the conversion directly.
The library skips the usual pth -> onnx -> tensorrt pipeline and converts the pth model straight into a TensorRT model, and it already supports many of mmdetection's models.
The environment used in this post:
GPU: Nvidia GTX 1080 (server)
Date: 2021.03
OS: Ubuntu 16.04
Nvidia driver: 460.39
CUDA: 11.1
cuDNN: 8.1.1
There are plenty of tutorials covering this basic setup, so it is not repeated here; configure it according to your own machine.
The TensorRT version used here is 7.2.3.4.
Anaconda is recommended for managing the Python environment. Install pycuda first:
pip install pycuda
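As a quick sanity check (a minimal sketch, not part of the original instructions), you can confirm that PyCUDA sees the GPU and driver:

# PyCUDA sanity check: creates a CUDA context and prints the device name.
import pycuda.autoinit          # noqa: F401 -- initializes a context on GPU 0
import pycuda.driver as cuda

print(cuda.Device(0).name())        # e.g. "GeForce GTX 1080"
print(cuda.get_driver_version())    # CUDA driver version as an integer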
Download TensorRT
Link: https://developer.nvidia.com/zh-cn/tensorrt
Select TensorRT-7.2.3.4.Ubuntu-16.04.x86_64-gnu.cuda-11.1.cudnn8.1.tar.gz
Extract it:
tar zxfv TensorRT-7.2.3.4.Ubuntu-16.04.x86_64-gnu.cuda-11.1.cudnn8.1.tar.gz
Contents of the extracted folder:
# ls
TensorRT-Release-Notes.pdf  bin  data  doc  graphsurgeon  include  lib  onnx_graphsurgeon  python  samples  targets  uff
Pick the wheel matching your Python version and install it:
cd TensorRT-7.2.3.4/python
pip install tensorrt-7.2.3.4-cp37-none-linux_x86_64.whl
cd TensorRT-7.2.3.4/graphsurgeon
pip install graphsurgeon-0.4.5-py2.py3-none-any.whl
Add CUDA and TensorRT to the environment variables (replace "your_path_to_TensorRT-7.2.3.4" with the actual extraction path):
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/lib64
export PATH=$PATH:"your_path_to_TensorRT-7.2.3.4"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"your_path_to_TensorRT-7.2.3.4/lib"
Verify the installation in a Python shell:
python
import tensorrt
tensorrt.__version__
--> '7.2.3.4'
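Optionally, go one step further and create a TensorRT Builder; this confirms that the native libnvinfer libraries on LD_LIBRARY_PATH can actually be loaded, not just the Python bindings (a minimal sketch):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)          # fails here if libnvinfer.so cannot be resolved
print(builder.platform_has_fast_fp16)  # whether the GPU has fast fp16 support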
Install mmcv-full (replace {cu_version} and {torch_version} with your CUDA and PyTorch versions, e.g. cu111 and torch1.8.0):
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
Install mmdetection:
git clone git@github.com:open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e . # or "python setup.py develop"
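Before moving on, it is worth checking (a minimal sketch) that mmcv-full and mmdetection import cleanly and that the CUDA ops were compiled:

import torch
import mmcv
import mmdet
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

print(torch.__version__, torch.cuda.is_available())    # expect True
print(mmcv.__version__, get_compiling_cuda_version(), get_compiler_version())
print(mmdet.__version__)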
Install torch2trt_dynamic:
git clone git@github.com:grimoire/torch2trt_dynamic.git
cd torch2trt_dynamic
python setup.py develop
Build amirstan_plugin, which provides the custom TensorRT plugins used by the converter:
git clone --depth=1 git@github.com:grimoire/amirstan_plugin.git
cd amirstan_plugin
git submodule update --init --progress --depth=1
In theory that single command should be enough, but it failed for me; if it runs without errors, skip ahead to the next step.
Fix for the submodule update error: the http protocol did not work for me, so switch the submodule URL to the git (SSH) form.
In the amirstan_plugin/.gitmodules file, change the url on the third line to git@github.com:NVIDIA/cub.git, so it reads:
[submodule "third_party/cub"]
    path = third_party/cub
    url = git@github.com:NVIDIA/cub.git
    branch = 1.8.0
Likewise, in amirstan_plugin/.git/modules/third_party/cub/config, change the url under [remote "origin"] to git@github.com:NVIDIA/cub.git, so the file reads:
[core]
    repositoryformatversion = 0
    filemode = true
    bare = false
    logallrefupdates = true
    worktree = ../../../../third_party/cub
[remote "origin"]
    url = git@github.com:NVIDIA/cub.git
    fetch = +refs/heads/main:refs/remotes/origin/main
[branch "main"]
    remote = origin
    merge = refs/heads/main
Then re-run the submodule update:
git submodule update --init --progress --depth=1
mkdir build
cd build
cmake -DTENSORRT_DIR=${your_path_to_tensorrt} ..
If cmake prints something like:
-- Found TensorRT headers at ../TensorRT-7.2.3.4/include
-- Find TensorRT libs at ../TensorRT-7.2.3.4/lib/libnvinfer.so; ../TensorRT-7.2.3.4/lib/libnvparsers.so; ../TensorRT-7.2.3.4/lib/libnvinfer_plugin.so
-- Found TENSORRT: ../TensorRT-7.2.3.4/include
-- WITH_DEEPSTREAM: false
-- GPU_ARCHS is not defined. Generating CUDA code for default SMs: 35;53;61;70;75;80
-- Configuring done
-- Generating done
-- Build files have been written to: ../amirstan_plugin/build
then the makefiles were generated successfully under the build folder. Build the plugins:
make -j10
After the build finishes, a set of libraries appears under build/lib:
# ls
libadaptivePoolPlugin_static.a libcarafeFeatureReassemblePlugin_static.a libexViewPlugin_static.a liblayerNormPlugin_static.a libroiPoolPlugin_static.a libtorchEmbeddingPlugin_static.a
libamir_cuda_util.a libdeformableConvPlugin_static.a libgridAnchorDynamicPlugin_static.a libmeshGridPlugin_static.a libtorchBmmPlugin_static.a libtorchFlipPlugin_static.a
libamirstan_plugin.so libdeformablePoolPlugin_static.a libgridSamplePlugin_static.a librepeatDimsPlugin_static.a libtorchCumMaxMinPlugin_static.a libtorchGatherPlugin_static.a
libbatchedNMSPlugin_static.a libdelta2bboxPlugin_static.a libgroupNormPlugin_static.a libroiExtractorPlugin_static.a libtorchCumPlugin_static.a libtorchNMSPlugin_static.a
export AMIRSTAN_LIBRARY_PATH=<amirstan_plugin_root>/build/lib
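The converter looks the plugin library up through this variable, so it is worth confirming that libamirstan_plugin.so and its TensorRT dependencies resolve correctly; a minimal sketch using ctypes:

import ctypes
import os

# Load the plugin the same way a TensorRT application would (dlopen);
# raises OSError if the plugin or the TensorRT libraries cannot be found.
plugin_path = os.path.join(os.environ["AMIRSTAN_LIBRARY_PATH"], "libamirstan_plugin.so")
ctypes.CDLL(plugin_path)
print("loaded", plugin_path)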
Enter the root directory of mmdetection-to-tensorrt (clone it from https://github.com/grimoire/mmdetection-to-tensorrt if you have not already) and install it:
python setup.py develop
# pip show mmdet2trt
-->
Name: mmdet2trt
Version: 0.3.0
Summary: mmdetection to tensorrt converter
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /workspace/nfs/tensorrt_test/mmdetection-to-tensorrt
Requires:
Required-by:
Finally, run inference.py in the demo folder of the mmdetection-to-tensorrt project. Edit the parser arguments in inference.py to point to your own model and test image; once configured, running the script generates the model's detection results on the test image.
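As an alternative to editing the demo script, the library also exposes a Python API. The sketch below follows the style of the project's README at the time; the argument names (opt_shape_param, fp16_mode, max_workspace_size) may differ between versions, and the config/checkpoint paths are placeholders for your own model, so treat this as a sketch rather than a definitive recipe.

import torch
from mmdet2trt import mmdet2trt

cfg_path = "mmdetection/configs/retinanet/retinanet_r50_fpn_1x_coco.py"   # placeholder config
weight_path = "checkpoints/retinanet_r50_fpn_1x_coco.pth"                 # placeholder checkpoint

# min / optimal / max input shapes for the dynamic-shape TensorRT engine
opt_shape_param = [[
    [1, 3, 320, 320],
    [1, 3, 800, 1344],
    [1, 3, 1344, 1344],
]]

trt_model = mmdet2trt(
    cfg_path,
    weight_path,
    opt_shape_param=opt_shape_param,
    fp16_mode=True,                # set to False if fp16 accuracy is a concern
    max_workspace_size=1 << 30,    # 1 GiB workspace for TensorRT tactic selection
)
torch.save(trt_model.state_dict(), "detector_trt.pth")   # reloadable engine wrapper

The saved state dict can later be reloaded for deployment, which keeps the conversion step separate from inference.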
References:
https://github.com/grimoire/mmdetection-to-tensorrt
https://zhuanlan.zhihu.com/p/165359425