YOLOv5 + DeepSORT + TensorRT Object Detection and Tracking

This article has also been published on my personal homepage: wangcong.net

Update:

The feature-extraction step of DeepSORT has been accelerated!
P.S. The C++ version of the project is available at: https://github.com/cong/yolov5_deepsort_tensorrt_cpp
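For context on what the accelerated step does: DeepSORT embeds each detection crop into an appearance feature vector and matches it against the features stored per track, typically by cosine distance, and batching/vectorizing this step is the usual source of the speedup. The snippet below is an illustration only (none of these names come from this repository), showing the matching cost computed in one vectorized call:

```python
import numpy as np

def cosine_distance_matrix(track_feats: np.ndarray, det_feats: np.ndarray) -> np.ndarray:
    """Pairwise cosine distance between appearance embeddings.

    track_feats: (T, D) array, one row per stored track embedding.
    det_feats:   (N, D) array, one row per detection-crop embedding.
    Returns a (T, N) cost matrix; smaller means more similar.
    """
    a = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    b = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    return 1.0 - a @ b.T
```

Computing the whole cost matrix at once, instead of looping crop by crop, is the same idea behind batching the feature-extraction network itself.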


Introduction

This project implements object detection and tracking with YOLOv5 and DeepSORT. The model is converted from PyTorch to a TensorRT engine via TensorRTX, and the code is then deployed to an NVIDIA Jetson Xavier NX.

The project runs correctly on both the NVIDIA Jetson Xavier NX and x86 platforms.

Project repository: https://github.com/cong/yolov5_deepsort_tensorrt

Dependencies

  1. On the x86 architecture:
    • Ubuntu 20.04 or 18.04 with CUDA 10.0 and cuDNN 7.6.5
    • TensorRT 7.0.0.1
      - PyTorch 1.7.1+cu110, TorchVision 0.8.2+cu110, TorchAudio 0.7.2
    • OpenCV-Python 4.2
    • pycuda 2021.1
  2. On the NVIDIA embedded platform:
    • Ubuntu18.04 with CUDA 10.2 and cuDNN 8.0.0
    • TensorRT 7.1.3.0
      - PyTorch 1.8.0 and TorchVision 0.9.0
    • OpenCV-Python 4.1.1
    • pycuda 2020.1

Speed tests

On the x86 architecture with an RTX 2080Ti:

Networks             Without TensorRT              With TensorRT
                     (latency / FPS / GPU memory)  (latency / FPS / GPU memory)
YOLOv5               14 ms / 71 FPS / 1239 MB      10 ms / 100 FPS / 2801 MB
YOLOv5 + DeepSORT    23 ms / 43 FPS / 1276 MB      16 ms / 62 FPS / 2842 MB

On the NVIDIA Jetson Xavier NX:

Networks             Without TensorRT    With TensorRT
                                         (latency / FPS / GPU memory)
YOLOv5               N/A                 43 ms / 23 FPS / 1397 MB
YOLOv5 + DeepSORT    N/A                 163 ms / 6 FPS / 3241 MB

The speed of DeepSORT depends on the number of targets in the frame.
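The FPS figures in the tables above are simply the reciprocal of the per-frame latency; a quick sanity check of the Jetson numbers:

```python
def fps_from_latency(latency_ms: float) -> float:
    """Frames per second implied by a per-frame latency in milliseconds."""
    return 1000.0 / latency_ms

# Jetson Xavier NX rows from the table above:
print(round(fps_from_latency(43)))   # YOLOv5 alone      -> 23
print(round(fps_from_latency(163)))  # YOLOv5 + DeepSORT -> 6
```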

Inference

  1. Clone the project

    git clone https://github.com/cong/yolov5_deepsort_tensorrt.git
    
  2. Run the demo

    python demo.py
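Conceptually, the demo loops over video frames, runs the detector on each frame, and hands the resulting boxes to the tracker, which assigns persistent IDs. The sketch below is a heavily simplified, self-contained stand-in for that data flow (greedy IoU association only; real DeepSORT adds a Kalman filter and appearance features, and none of these class or function names come from the repository):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class NaiveTracker:
    """Toy tracker: greedily matches each detection to the stored track
    box with the highest IoU, otherwise opens a new track id."""

    def __init__(self, iou_thresh=0.3):
        self.tracks = {}        # track id -> last box seen
        self.next_id = 0
        self.iou_thresh = iou_thresh

    def update(self, detections):
        """detections: list of (x1, y1, x2, y2) boxes. Returns {track_id: box}."""
        free = set(self.tracks)            # tracks not yet claimed this frame
        out = {}
        for det in detections:
            best_id, best = None, self.iou_thresh
            for tid in free:
                overlap = iou(self.tracks[tid], det)
                if overlap > best:
                    best_id, best = tid, overlap
            if best_id is None:            # no match: start a new track
                best_id = self.next_id
                self.next_id += 1
            else:
                free.discard(best_id)
            self.tracks[best_id] = det
            out[best_id] = det
        return out
```

Feeding two consecutive frames where a box shifts slightly keeps its ID stable, while a box appearing elsewhere gets a fresh ID, which is the behavior the real pipeline produces per frame.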
    

Model conversion

Convert the PyTorch YOLOv5 model to a TensorRT engine.
Note: this project uses YOLOv5 version 4.0, and the TensorRTX version must strictly match it (use the TensorRTX yolov5-v4.0 tag).

  1. Convert the ***.pt file to a ***.wts file.

    git clone -b v5.0 https://github.com/ultralytics/yolov5.git
    git clone https://github.com/wang-xinyu/tensorrtx.git
    # download https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt
    cp {tensorrtx}/yolov5/gen_wts.py {ultralytics}/yolov5
    cd {ultralytics}/yolov5
    python gen_wts.py yolov5s.pt
    # a file 'yolov5s.wts' will be generated.
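The .wts file produced by gen_wts.py is plain text: the number of tensors on the first line, then one line per tensor with its name, element count, and every fp32 value hex-encoded in big-endian order. As an illustration of that layout (the authoritative format is whatever {tensorrtx}/yolov5/gen_wts.py emits; check that script if in doubt), a minimal writer might look like:

```python
import struct

import numpy as np

def write_wts(tensors: dict, path: str) -> None:
    """Write a dict of name -> array in the tensorrtx .wts text layout:
    line 1: tensor count; then '<name> <count> <hex fp32> ...' per tensor."""
    with open(path, "w") as f:
        f.write(f"{len(tensors)}\n")
        for name, t in tensors.items():
            flat = np.asarray(t, dtype=np.float32).reshape(-1)
            f.write(f"{name} {flat.size}")
            for v in flat:
                # big-endian float32, hex-encoded, space-separated
                f.write(" " + struct.pack(">f", float(v)).hex())
            f.write("\n")
```

Because the format is plain text, a generated yolov5s.wts can be eyeballed to verify the conversion before building the engine.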
    
  2. Build the project and generate the ***.engine file.

    cd {tensorrtx}/yolov5/
    # update CLASS_NUM in yololayer.h if your model is trained on custom dataset
    mkdir build
    cd build
    cp {ultralytics}/yolov5/yolov5s.wts {tensorrtx}/yolov5/build
    cmake ..
    make
    # serialize model to plan file
    sudo ./yolov5 -s [.wts] [.engine] [s/m/l/x/s6/m6/l6/x6 or c/c6 gd gw]
    # deserialize and run inference, the images in [image folder] will be processed.
    sudo ./yolov5 -d [.engine] [image folder]
    # For example yolov5s
    sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
    sudo ./yolov5 -d yolov5s.engine ../samples
    # For example Custom model with depth_multiple=0.17, width_multiple=0.25 in yolov5.yaml
    sudo ./yolov5 -s yolov5_custom.wts yolov5.engine c 0.17 0.25
    sudo ./yolov5 -d yolov5.engine ../samples
    

  3. When the two images _zidane.jpg and _bus.jpg have been generated, the run was successful.
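The serialized engine has a fixed input shape (640x640 for the standard yolov5s build), so frames must be letterboxed before inference: scaled to fit while preserving aspect ratio, then padded (YOLOv5 conventionally pads with gray value 114). A numpy-only sketch of that preprocessing (the repository's actual code may implement it differently, e.g. with OpenCV's resize):

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640, pad_value: int = 114):
    """Resize with preserved aspect ratio (nearest-neighbor, numpy only)
    and pad to a square size x size canvas. Returns the canvas, the scale
    factor, and the (left, top) padding offsets needed to map boxes back."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor resize via index sampling
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys[:, None], xs[None, :]]
    canvas = np.full((size, size) + img.shape[2:], pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, scale, (left, top)
```

The returned scale and offsets are what you would use to map the engine's output boxes back into original-image coordinates.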

Customization

  1. Train your own model.
  2. Convert the model to engine format (the TensorRTX version must match
    the YOLOv5 version), then replace ***.engine and libmyplugins.so in
    this project with your own files.

Other

  • Shamelessly asking for a GitHub Star!
  • For more information, visit the Blog.
