[NVIDIA Jetson Xavier] DeepStream: Deploying a Custom YOLOv5 Detection Model


Set up the YOLOv5 environment as described in Part 4.

Convert PyTorch model to wts file

  1. Download repositories
git clone https://github.com/wang-xinyu/tensorrtx.git
git clone https://github.com/ultralytics/yolov5.git
  2. Download the latest YOLOv5 weights (YOLOv5s, YOLOv5m, YOLOv5l, or YOLOv5x) into the yolov5 folder (example for YOLOv5s)
wget https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt -P yolov5/
  3. Copy the gen_wts.py file (from the tensorrtx/yolov5 folder) to the yolov5 (ultralytics) folder
cp tensorrtx/yolov5/gen_wts.py yolov5/gen_wts.py
  4. Copy the trained YOLOv5 detection model best.pt to the yolov5 folder
  5. Generate the wts file
cd yolov5
python3 gen_wts.py best.pt

The best.wts file will be generated in the yolov5 folder.
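For reference, the .wts file produced above is a plain-text weight dump. The real gen_wts.py reads the PyTorch checkpoint with torch; the sketch below only illustrates the text layout it emits (an assumption based on the tensorrtx convention): the first line is the tensor count, then one line per tensor with its name, element count, and big-endian float32 hex words.

```python
import struct
import numpy as np

def write_wts(tensors, path):
    """Serialize {name: ndarray} in the tensorrtx .wts text format:
    first line = number of tensors, then one line per tensor:
    '<name> <element count> <hex of each big-endian float32>'."""
    with open(path, "w") as f:
        f.write(f"{len(tensors)}\n")
        for name, arr in tensors.items():
            flat = arr.astype(np.float32).ravel()
            hexvals = " ".join(struct.pack(">f", float(v)).hex() for v in flat)
            f.write(f"{name} {flat.size} {hexvals}\n")

# Tiny demonstration with dummy weights (names are hypothetical)
write_wts({"conv1.weight": np.ones((2, 2)), "conv1.bias": np.zeros(2)}, "demo.wts")
with open("demo.wts") as f:
    print(f.read().splitlines()[0])  # prints "2" (the tensor count)
```

This is why the file is much larger than the .pt checkpoint: every weight is written out as readable hex text.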

[Error] No module named tqdm

pip install tqdm

[Error] No module named seaborn

pip3 install seaborn

Because matplotlib fails to install properly, seaborn cannot be installed either. Try:

python --version
python -m pip install seaborn

This fails again because of matplotlib.

Try the approaches from:

https://toptechboy.com/

https://blog.csdn.net/LYiiiiiii/article/details/119052823

sudo apt-get install python3-seaborn

Run again:

python3 gen_wts.py best.pt

[Error] No module named numpy.testing.nosetester

This happens because of a version incompatibility between numpy and scipy: numpy has deprecated (and later removed) numpy.testing.nosetester in its recent releases, while the installed scipy still imports it.
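A quick diagnostic for this mismatch is to check whether the installed numpy still ships the submodule at all (nosetester was deprecated in numpy 1.18 and dropped in later releases):

```python
# Check whether numpy.testing.nosetester is still present in the installed
# numpy; if it is gone, any scipy old enough to import it will fail.
import importlib.util
import numpy

print("numpy", numpy.__version__)
has_nosetester = importlib.util.find_spec("numpy.testing.nosetester") is not None
print("numpy.testing.nosetester present:", has_nosetester)
```

If it prints False, upgrading scipy (or pinning an older numpy) resolves the import error.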

Try:

pip install scipy

[Error] No lapack/blas resources found

Try the fix from https://blog.csdn.net/liulangdeshusheng/article/details/52433075:

Problem encountered:
no lapack/blas resources found
Solution: install LAPACK:
sudo apt-get install liblapack-dev
Then reinstall scipy; this time a different problem appears.


Problem encountered:

error: library dfftpack has Fortran sources but no Fortran compiler found
Solution: install a Fortran compiler:
sudo apt-get install gfortran

Run again:

python3 gen_wts.py best.pt

This generates best.wts.

Modify yololayer.h and yolov5.cpp to match the trained YOLOv5 parameters

https://zhuanlan.zhihu.com/p/365191541

1. Check the YOLOv5 training parameters:

https://www.icode9.com/content-3-774443.html

  • In the data folder, check myvoc.yaml for the number of classes

  • In the weights folder, check which pretrained model variant was used, e.g. yolov5m

2. Modify yololayer.h and yolov5.cpp so that the parameters match those used during training; otherwise the conversion will fail.

  • In yololayer.h, change the number of classes

  • In yolov5.cpp, tune the batch size according to available GPU memory; 1 is usually fine
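The class count in data/myvoc.yaml (the nc key) must equal the class count set in yololayer.h. As a quick cross-check, here is a minimal sketch that reads nc from the yaml, assuming the standard ultralytics data-yaml layout (the sample contents below are hypothetical):

```python
import re

def read_nc(yaml_text):
    """Extract the class count ('nc') from an ultralytics data yaml using a
    simple line-based match, so no PyYAML dependency is needed."""
    m = re.search(r"^nc:\s*(\d+)", yaml_text, re.MULTILINE)
    if m is None:
        raise ValueError("no 'nc:' key found in yaml text")
    return int(m.group(1))

# Hypothetical myvoc.yaml contents with three classes
sample = "train: images/train\nval: images/val\nnc: 3\nnames: ['cat', 'dog', 'bird']\n"
print(read_nc(sample))  # prints 3
```

Whatever value this returns is what the class-count constant in yololayer.h has to be set to before building.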

Convert wts file to TensorRT model

Following the instructions at https://github.com/DanaHan/Yolov5-in-Deepstream-5.0, before building tensorrtx/yolov5 you also need to do the following:

Important Note:

You should replace the yololayer.cu and hardswish.cu files in tensorrtx/yolov5 with the ones from that repository.

  1. Build tensorrtx/yolov5
cd tensorrtx/yolov5
mkdir build
cd build
cmake ..
make
  2. Copy the generated best.wts file to the tensorrtx/yolov5 build folder
cp yolov5/best.wts tensorrtx/yolov5/build/best.wts
  3. Convert to a TensorRT model (the best.engine file will be generated in the tensorrtx/yolov5/build folder; the trailing m selects the yolov5m network size)
sudo ./yolov5 -s best.wts best.engine m
  4. Note: by default, the yolov5 program generates the model with batch size = 1 and in FP16 mode.
#define USE_FP16  // set USE_INT8 or USE_FP16 or USE_FP32
#define DEVICE 0  // GPU id
#define NMS_THRESH 0.4
#define CONF_THRESH 0.5
#define BATCH_SIZE 1

Edit the yolov5.cpp file before compiling if you want to change these parameters.

We now have best.engine and libmyplugin.so for later use with DeepStream.
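As a pointer for that later DeepStream step, a Gst-nvinfer configuration file would typically reference both artifacts. This is only a sketch: the keys are standard nvinfer properties, but the parse function name, paths, and class count are assumptions that must be checked against the Yolov5-in-Deepstream-5.0 sample actually used:

```ini
[property]
gpu-id=0
# TensorRT engine built above
model-engine-file=best.engine
# custom plugin library built alongside the engine
custom-lib-path=libmyplugin.so
# function name is an assumption; check the bbox-parser source in the sample
parse-bbox-func-name=NvDsInferParseCustomYoloV5
# 2 = FP16, matching USE_FP16 in yolov5.cpp
network-mode=2
# placeholder: must match the class count set in yololayer.h
num-detected-classes=3
gie-unique-id=1
```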
