Jetson Nano | Deploying YOLOv5 with DeepStream (PyTorch model --> wts file --> TensorRT engine)

Contents

    • Environment
    • Installing YOLOv5 (host)
    • Installing DeepStream (on the Nano)
    • Converting the PyTorch model to a wts file [yolov5s.wts] (host)
    • Converting the wts file to a TensorRT engine [yolov5s.engine] (Nano)
    • Deploying yolov5s with DeepStream
    • Deployment test

For details, see: https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5-5.0.md

Environment

Matplotlib (for the Jetson platform)

sudo apt-get install python3-matplotlib

PyTorch (for the Jetson platform)

wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl

TorchVision (for the Jetson platform)

sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.9.0
python3 setup.py install --user

Installing YOLOv5 (host)

git clone https://github.com/ultralytics/yolov5.git   # I used the latest release at the time, v5.0

Place the weight files (yolov5s.pt / yolov5m.pt / yolov5l.pt / yolov5x.pt) in the weights folder.

PyTorch 1.7 or later is recommended.
This guide uses the yolov5s variant, simply because it is the smallest.

Installing DeepStream (on the Nano)

Run the following command to install the required packages:

sudo apt install \
libssl1.0.0 \
libgstreamer1.0-0 \
gstreamer1.0-tools \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
libgstrtspserver-1.0-0 \
libjansson4=2.11-1

Download the DeepStream SDK package deepstream_sdk_v5.1.0_jetson.tbz2.

Run the following commands to extract and install the DeepStream SDK:

sudo tar -xvf deepstream_sdk_v5.1.0_jetson.tbz2 -C /
cd /opt/nvidia/deepstream/deepstream-5.1
sudo ./install.sh
sudo ldconfig

Test

Change into the sample directory (first edit prebuild.sh and comment out every download except yolov3-tiny):

cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo

Run the script (it downloads yolov3-tiny.cfg and yolov3-tiny.weights):

./prebuild.sh

Then run:

deepstream-app -c deepstream_app_config_yoloV3_tiny.txt

After installation, /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo contains NVIDIA's official YOLO deployment sample code, but it only covers YOLOv3.

Converting the PyTorch model to a wts file [yolov5s.wts] (host)

git clone https://github.com/wang-xinyu/tensorrtx.git

Copy tensorrtx/yolov5/gen_wts.py into the yolov5 folder.
cd into the yolov5 folder.
Run the following command to generate the yolov5s.wts file:

python gen_wts.py weights/yolov5s.pt
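The generated .wts file is a simple plain-text weight dump. Below is a stdlib-only Python sketch of the format as used by the tensorrtx project (tensor count on the first line, then one line per tensor: name, element count, and each float as the 8 hex digits of its big-endian IEEE-754 bits). The exact layout is an assumption, and write_wts is a hypothetical helper for illustration, not part of gen_wts.py:

```python
import io
import struct

def write_wts(tensors, fh):
    """Write a dict of name -> flat list of floats in .wts text form."""
    fh.write(f"{len(tensors)}\n")
    for name, vals in tensors.items():
        # each float becomes the 8 hex digits of its big-endian IEEE-754 bits
        hexvals = " ".join(struct.pack(">f", v).hex() for v in vals)
        fh.write(f"{name} {len(vals)} {hexvals}\n")

buf = io.StringIO()
write_wts({"conv1.weight": [1.0, -0.5]}, buf)
print(buf.getvalue(), end="")
```

Here 1.0 encodes to 3f800000 and -0.5 to bf000000, so the tensor line reads "conv1.weight 2 3f800000 bf000000".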

Converting the wts file to a TensorRT engine [yolov5s.engine] (Nano)

Note: every time you change a parameter, you must regenerate the engine.

Clone the tensorrtx repository onto the Nano.

cd tensorrtx/yolov5
mkdir build
cd build
cmake ..
make

Copy the yolov5s.wts file into the tensorrtx/yolov5/build directory.
Generate yolov5s.engine:

sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
#sudo ./yolov5 -s [.wts] [.engine] [s/m/l/x/s6/m6/l6/x6 or c/c6 gd gw]

At this point we have generated a C++-based TensorRT engine file. Test it on the two sample images:

sudo ./yolov5 -d yolov5s.engine ../samples  

Note: by default, the yolov5 binary builds the model with batch size = 1 in FP16 mode. To change these parameters, edit yolov5.cpp before compiling.

#define USE_FP16  // set USE_INT8 or USE_FP16 or USE_FP32
#define DEVICE 0  // GPU id
#define NMS_THRESH 0.4
#define CONF_THRESH 0.5
#define BATCH_SIZE 1
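To illustrate what CONF_THRESH and NMS_THRESH control, here is a minimal Python sketch of confidence filtering followed by greedy IoU-based non-maximum suppression. It mirrors the idea only, not the actual tensorrtx C++ implementation:

```python
CONF_THRESH = 0.5  # drop detections with lower confidence
NMS_THRESH = 0.4   # suppress boxes overlapping a kept box by more IoU than this

def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(dets):
    # dets: list of (box, score); confidence filter, then greedy NMS by score
    dets = [d for d in dets if d[1] >= CONF_THRESH]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) <= NMS_THRESH for k in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.3)]
print(nms(dets))  # only the first box survives: the second overlaps it, the third is low-confidence
```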

If you built a custom model by changing the width and depth multipliers of the convolution layers, you can generate the engine directly with the following command:

sudo ./yolov5 -s yolov5_custom.wts yolov5.engine c 0.17 0.25
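The two trailing numbers (0.17 and 0.25 here) are the depth multiplier gd and width multiplier gw. As a sketch of how YOLOv5 derives layer sizes from them (the rounding logic is an assumption based on the yolov5 codebase; get_depth and get_width are illustrative helpers):

```python
import math

def get_depth(n, gd):
    # scale the repeat count of a block, keeping at least one repeat
    return max(round(n * gd), 1) if n > 1 else n

def get_width(c, gw, divisor=8):
    # scale a channel count and round up to a multiple of `divisor`
    return int(math.ceil(c * gw / divisor) * divisor)

# yolov5s itself corresponds to gd=0.33, gw=0.50:
print(get_depth(9, 0.33), get_width(1024, 0.50))
```

With gd=0.33 a 9-repeat block shrinks to 3 repeats, and with gw=0.50 a 1024-channel layer shrinks to 512 channels; smaller multipliers like 0.17/0.25 prune the model further.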

I usually shrink the width and depth to prune the model and speed up inference, and this option makes that kind of debugging very convenient.
The v5.0 tensorrtx release also supports YOLOv5's P6 models; if you trained a P6 model, use the following commands:

sudo ./yolov5 -s ../yolov5s.wts yolov5s.engine s6 
sudo ./yolov5 -s yolov5_custom.wts yolov5.engine c6 0.17 0.25

If you want to use your own trained model:
Edit tensorrtx/yolov5/yololayer.h and set the class count (the default is 80):
static constexpr int CLASS_NUM = 1;  // set to your own number of classes
Then, in the tensorrtx/yolov5/ directory, compile the code:
mkdir build0  # a separate folder keeps this build apart from the stock yolov5s one
cd build0
cmake ..
make
The remaining steps are the same as above.

Note: by default, the yolov5 binary generates a model with batch size = 1, FP16 mode, and the s variant (I used the s network, so no changes were needed).

#define USE_FP16  // comment this out to use FP32
#define DEVICE 0  // GPU id
#define NMS_THRESH 0.4
#define CONF_THRESH 0.5
#define BATCH_SIZE 1
#define NET s  // s m l x

If you need to change the defaults above, edit yolov5.cpp before compiling; you can also change the parameters in yololayer.h:

static constexpr int CLASS_NUM = 80;
static constexpr int INPUT_H = 608;
static constexpr int INPUT_W = 608;

Deploying yolov5s with DeepStream

First, download the DeepStream-Yolo project:

git clone https://github.com/marcoslucianops/DeepStream-Yolo.git

Compile nvdsinfer_custom_impl_Yolo.
Copy the YOLOv5 DeepStream files, i.e. move the downloaded DeepStream-Yolo/external/yolov5-5.0 folder into sources:

sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/
cd /opt/nvidia/deepstream/deepstream-5.1/sources/
cp -r /home/666/DeepStream-Yolo/external/yolov5-5.0 /opt/nvidia/deepstream/deepstream-5.1/sources/

Copy yolov5s.engine:

cp /home/666/tensorrtx/yolov5/build/yolov5s.engine /opt/nvidia/deepstream/deepstream-5.1/sources/yolov5-5.0 

Compile.
For your own dataset, edit the parameters in config_infer_primary.txt and the class-name file labels.txt:

num-detected-classes=80  # your own number of classes

labels.txt lists one class name per line, e.g. fire.
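Putting the pieces together, the relevant [property] fields of config_infer_primary.txt might look like the sketch below (values are illustrative for a one-class model; network-mode=2 selects FP16 in DeepStream's nvinfer configuration):

```ini
[property]
model-engine-file=yolov5s.engine
labelfile-path=labels.txt
num-detected-classes=1
network-mode=2

[class-attrs-all]
pre-cluster-threshold=0.25
```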

After editing, run:

cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolov5-5.0
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

Test the model.
Pre-edited deepstream_app_config.txt and config_infer_primary.txt files are available in the repository's external/yolov5-5.0 folder.

Run:

deepstream-app -c deepstream_app_config.txt

Note: edit the config_infer_primary.txt file according to the model you chose.

For example, if you use YOLOv5x, change

model-engine-file=yolov5s.engine

to

model-engine-file=yolov5x.engine

To change NMS_THRESH, edit nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp and recompile:

#define kNMS_THRESH 0.45

To change CONF_THRESH, edit config_infer_primary.txt:

[class-attrs-all]
pre-cluster-threshold=0.25

Deployment test

Test inference on a video file:

cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolov5-5.0
deepstream-app -c deepstream_app_config.txt

Set the path of the test video in deepstream_app_config.txt.
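A file source in deepstream_app_config.txt is configured roughly as below (a sketch: type=3 means multi-URI in the deepstream-app reference, and the uri path is just an example, so point it at your own video):

```ini
[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_1080p_h264.mp4
num-sources=1
```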
Python video inference (python yolov5_trt.py) improved from the earlier 5 fps to 20 fps; the difference is imperceptible to the eye, essentially real time.
The C++ deployment loads the model and runs inference faster than Python, at about 25 fps.

My own trained fire-detection model also runs this way.

