Post-processing the raw output of a YOLOv5 network to produce object-detection results
TensorRT engine post-processing
The pipeline consists of:
(1) Loading an image and converting it to the required input format (see: "对图像做前处理,以便按照yolov5的engine输入格式输入网络", 爱吃油淋鸡的莫何's CSDN blog)
(2) Running inference with the engine
(3) Post-processing the raw output
(4) Drawing the detection results on the image
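Step (3) is the core of this post: for a 640x640 input, YOLOv5 emits a flat [batch, rows, 5 + numClasses] tensor where each row is (cx, cy, w, h, objectness, per-class scores). Below is a minimal, self-contained sketch of the decode + greedy per-class NMS logic. The struct and function names are my own for illustration; the full implementation is in the download link at the end of the post.

```cpp
#include <algorithm>
#include <vector>

// One decoded detection: box as (x1, y1, x2, y2) corners, class id, score.
struct Detection {
    float x1, y1, x2, y2;
    int   cls;
    float score;
};

// Intersection-over-union of two corner-form boxes.
static float iou(const Detection& a, const Detection& b) {
    float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
    float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
    float iw = std::max(0.f, ix2 - ix1), ih = std::max(0.f, iy2 - iy1);
    float inter = iw * ih;
    float uni = (a.x2 - a.x1) * (a.y2 - a.y1)
              + (b.x2 - b.x1) * (b.y2 - b.y1) - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

// Decode one image's raw rows (cx, cy, w, h, obj, class scores), filter by
// confidence, then run greedy per-class NMS on the survivors.
std::vector<Detection> postprocess(const float* out, int rows, int numClasses,
                                   float confThres, float iouThres) {
    const int stride = 5 + numClasses;
    std::vector<Detection> dets;
    for (int i = 0; i < rows; ++i) {
        const float* r = out + i * stride;
        if (r[4] < confThres) continue;          // objectness gate
        int best = 0;                            // arg-max class score
        for (int c = 1; c < numClasses; ++c)
            if (r[5 + c] > r[5 + best]) best = c;
        float score = r[4] * r[5 + best];        // combined confidence
        if (score < confThres) continue;
        float cx = r[0], cy = r[1], w = r[2], h = r[3];
        Detection d = {cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                       best, score};
        dets.push_back(d);
    }
    // Highest score first, then suppress same-class overlaps greedily.
    std::sort(dets.begin(), dets.end(),
              [](const Detection& a, const Detection& b) {
                  return a.score > b.score;
              });
    std::vector<Detection> kept;
    for (size_t i = 0; i < dets.size(); ++i) {
        bool drop = false;
        for (size_t k = 0; k < kept.size(); ++k)
            if (kept[k].cls == dets[i].cls && iou(kept[k], dets[i]) > iouThres) {
                drop = true;
                break;
            }
        if (!drop) kept.push_back(dets[i]);
    }
    return kept;
}
```

Typical thresholds are confThres = 0.25 and iouThres = 0.45, matching the YOLOv5 defaults; tune both for your use case.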
Notes:
1 A batch size is involved in three places: exporting the PyTorch .pt file to .onnx, converting the .onnx file to an engine, and calling infer at runtime. All three values must be identical; otherwise inference fails with "Cuda failure: 700, Aborted (core dumped)".
2 The code only demonstrates inference on a single, hard-coded image.
Code:
1 CMakeLists.txt
# CMakeLists.txt
cmake_minimum_required(VERSION 2.6)
project(yolo)
add_definitions(-std=c++11)
option(CUDA_USE_STATIC_CUDA_RUNTIME OFF)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE Debug)
include_directories(${PROJECT_SOURCE_DIR}/include)
# include and link dirs of cuda and tensorrt, you need adapt them if yours are different
# cuda
include_directories(/usr/local/cuda-11.6/include)
link_directories(/usr/local/cuda-11.6/lib64)
# tensorrt
include_directories(/home/package/TensorRT-8.2.5.1/include/)
link_directories(/home/package/TensorRT-8.2.5.1/lib/)
include_directories(/home/package/TensorRT-8.2.5.1/samples/common/)
#link_directories(/home/package/TensorRT-8.2.5.1/lib/stubs/)
# opencv
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(yolo ${PROJECT_SOURCE_DIR}/postprocess.cpp)
target_link_libraries(yolo nvinfer)
target_link_libraries(yolo cudart)
target_link_libraries(yolo ${OpenCV_LIBS})
# the following library is only needed if you also convert onnx to engine
target_link_libraries(yolo /home/mec/hlj/package/TensorRT-8.2.5.1/lib/stubs/libnvonnxparser.so)
add_definitions(-O2 -pthread)
2 postprocess.cpp
// postprocess.cpp
// ref1 : https://blog.csdn.net/weixin_43863869/article/details/124614334
// ref2 : https://www.cnblogs.com/tangjunjun/p/16639361.html
#include "NvInfer.h"
#include "cuda_runtime_api.h"
// NOTE: the header names below were stripped when the original post was
// rendered as HTML; these are the typical headers for this sample.
#include <fstream>
#include <iostream>
#include <opencv2/opencv.hpp>
If anything here is described or understood incorrectly, please point it out. Many thanks!
Full code: https://pan.baidu.com/s/1IUPyVb15PpWpnJ4lrLVFNQ