Ubuntu TensorRT 5.1.5.0 samples: YOLOv3 → ONNX → TRT, and PyTorch → ONNX → TRT inference

Installing TensorRT 5.1.5.0

Download from the official site and extract: https://developer.nvidia.com/nvidia-tensorrt-5x-download
Add the environment variable:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/wxy/TensorRT-5.1.5.0/lib
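To make this persistent, append the same line to your ~/.bashrc.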

Install the Python bindings with pip, into whichever Python environment you use:

cd TensorRT-5.1.5.0/python
pip install tensorrt-5.1.5.0-cp36-none-linux_x86_64.whl
## verify the install succeeded
python -c "import tensorrt"

YOLOv3 → ONNX → TRT

YOLOv3 to ONNX
The official sample targets Python 2, but with a few small edits it also runs under Python 3. Changes to yolov3_to_onnx.py (see the sketch after this list):

1. In the parse_cfg_file function, decode the bytes: remainder = cfg_file.read().decode()
2. Comment out the two 'if sys.version_info[0] > 2:' checks that enforce the Python version
3. In the download_file function, drop the MD5 checksum verification
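A rough sketch of the three edits; the guard and checksum code below is paraphrased from memory of the TRT 5.1 sample, so exact line positions and wording may differ in your copy:

# 1. inside parse_cfg_file: under Python 3 the cfg is read as bytes,
#    so decode to str before any string processing
remainder = cfg_file.read().decode()

# 2. comment out the interpreter guards (the sample refuses to run on py3)
# if sys.version_info[0] > 2:
#     raise Exception(...)

# 3. inside download_file: skip the MD5 comparison, e.g. return the local
#    path unconditionally instead of checking the checksum_reference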

Then run:

python yolov3_to_onnx.py

ONNX to TRT
You may hit "'NoneType' object has no attribute 'serialize'". Trace the real failure from the log above it: it can be insufficient GPU memory, a failed builder or parser initialization, and so on — build_cuda_engine returns None when the build fails, and calling serialize() on that None produces this message.

python onnx_to_tensorrt.py
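For reference, a minimal sketch of the ONNX → engine step with the TensorRT 5 Python API (the function name build_engine is mine, not the sample's); the None check at the end is exactly where the 'serialize' error comes from:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28  # lower this if GPU memory is tight
        builder.max_batch_size = 1
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        engine = builder.build_cuda_engine(network)  # returns None on failure
        if engine is None:
            raise RuntimeError('engine build failed -- check the log above')
        with open(engine_path, 'wb') as f:
            f.write(engine.serialize())
        return engine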

PyTorch → TRT inference

Broadly there are a few approaches:
1. PyTorch → ONNX → TRT inference, similar to the YOLOv3 example above
2. PyTorch → ONNX, then the onnx-tensorrt package
3. PyTorch → Caffe model → TRT
4. Direct ONNX inference

1. PyTorch → ONNX → TRT inference

Possible minor bugs: https://blog.csdn.net/qq_33120609/article/details/96565184?depth_1-utm_source=distribute.pc_relevant.none-task&utm_source=distribute.pc_relevant.none-task
PyTorch to ONNX:

import torch

batch_size = 1  # the batch size is baked into the exported graph
model_path = './model_300.pkl'
model = torch.load(model_path)
dummy_input = torch.randn(batch_size, 3, 300, 300, device='cuda')
torch.onnx.export(model, dummy_input, "mymodel.onnx", verbose=False)
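As a quick sanity check before going further, the onnx package can validate the exported graph:

import onnx
model = onnx.load("mymodel.onnx")
onnx.checker.check_model(model)                  # raises if the graph is malformed
print(onnx.helper.printable_graph(model.graph))  # human-readable dump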

Test whether the ONNX model's output matches the PyTorch model's:

import cv2
import numpy as np
import onnx
import onnx_tensorrt.backend as backend

model = onnx.load("model_300.onnx")
engine = backend.prepare(model, device='CUDA:0')
path = '../Net/test.jpg'
img = cv2.imread(path)
print(img.shape)
img = cv2.resize(img, (300, 300))
img = img.transpose(2, 0, 1)      # HWC -> CHW
img = np.ascontiguousarray(img)
img = img[np.newaxis, :]          # add batch dim -> (1, 3, 300, 300)
print(img.shape)
input_data = img.astype(np.float32)
# or feed random data instead:
# input_data = np.random.random(size=(1, 3, 300, 300)).astype(np.float32)
output_data = engine.run(input_data)
print(output_data)
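To quantify the match, compare against the PyTorch output on the same input; torch_out below is assumed to come from running the original model on input_data:

# hypothetical check; torch_out = model(torch.from_numpy(input_data).cuda())
np.testing.assert_allclose(torch_out.detach().cpu().numpy(),
                           output_data[0], rtol=1e-3, atol=1e-5)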

Then convert to TRT following the YOLOv3 example above (the build_engine sketch earlier applies unchanged).

2. The onnx-tensorrt package

I downloaded TensorRT 5.1.5.0, so take care to build the v5.1 branch of the code:
https://github.com/onnx/onnx-tensorrt
https://github.com/onnx/onnx-tensorrt/tree/5.1

git clone --recursive https://github.com/onnx/onnx-tensorrt.git
git checkout 55d75dec4f289d1c7a63d5cf578b4c6ab441c03e
git checkout -b branch
mkdir build
cd build
cmake .. -DTENSORRT_ROOT=<tensorrt_install_dir>
# or, pinning the GPU architecture (61 = Pascal, e.g. GTX 10xx):
cmake .. -DTENSORRT_ROOT=<tensorrt_install_dir> -DGPU_ARCHS="61"
make -j8
sudo make install

onnx2trt model_300.onnx -o my_engine.trt
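The serialized engine can then be loaded back in Python with the TensorRT runtime; a minimal sketch:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open('my_engine.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
print(engine.max_batch_size)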

Possible minor bugs (model-side fixes for (2) and (3) are sketched below):
(1) ImportError: No module named onnx_tensorrt.backend
Run from the onnx-tensorrt directory (or install the Python module from there).
(2) While converting to ONNX/TensorRT: [8] Assertion failed: get_shape_size(new_shape) == get_shape_size(tensor.getDimensions())
Replace the -1 in the view() reshape with explicit dimensions.
(3) If you use SSD or something similar, maxpool with ceil_mode is not supported.
Replace it with explicit padding.
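A rough sketch of both workarounds in the PyTorch model before export; the concrete sizes are made-up examples, adjust them to your network:

import torch
import torch.nn as nn

class Flatten(nn.Module):
    # (2) avoid -1 in view(): spell out the flatten size
    def forward(self, x):
        # return x.view(x.size(0), -1)          # may trip the assertion
        return x.view(x.size(0), 512 * 7 * 7)   # explicit size (example value)

# (3) avoid ceil_mode in max pooling: pad right/bottom, then pool in floor mode
# pool = nn.MaxPool2d(3, stride=2, ceil_mode=True)   # not supported
pool = nn.Sequential(
    nn.ZeroPad2d((0, 1, 0, 1)),   # pad (left, right, top, bottom)
    nn.MaxPool2d(3, stride=2),
)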

3. PyTorch → Caffe model → TRT

For PyTorch < 0.4: https://github.com/longcw/pytorch2caffe
For PyTorch > 0.4: https://github.com/xxradon/PytorchToCaffe
I tested resnet18 with PyTorch 1.0.1.
The code is simple; modify example/resnet_pytorch_2_caffe.py as follows:

import sys
sys.path.insert(0,'.')
import torch
from torch.autograd import Variable
from torchvision.models import resnet
import pytorch_to_caffe

if __name__=='__main__':
    name='resnet18'
    resnet18=resnet.resnet18()
    torch.save(resnet18.state_dict(),"test.pth")
    checkpoint = torch.load("test.pth")
    resnet18.load_state_dict(checkpoint)
    resnet18.eval()
    input=torch.ones([1,3,224,224])

    pytorch_to_caffe.trans_net(resnet18,input,name)
    pytorch_to_caffe.save_prototxt('{}.prototxt'.format(name))
    pytorch_to_caffe.save_caffemodel('{}.caffemodel'.format(name))
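Running this writes resnet18.prototxt and resnet18.caffemodel to the working directory.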

Installing Caffe
Caffe is needed for the verification step — the hardest framework to install…
Caffe installation:
https://blog.csdn.net/sinat_23619409/article/details/86466700
https://blog.csdn.net/pangyunsheng/article/details/79418896
Caffe Python interface installation:
https://blog.csdn.net/dongdonglele521/article/details/86735682
The most pitfall-ridden part is Makefile.config, especially with Anaconda installed: you have to shield Anaconda's lib directory, and a few shared libraries inside Anaconda also interfere — uninstall them as the errors indicate. Here is my own Makefile.config:

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda3/lib
LIBRARIES += boost_thread stdc++ boost_regex
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
#PYTHON_INCLUDE := /usr/include/python2.7 \
#		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda3
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
#         $(ANACONDA_HOME)/include/python3.6m \
#         $(ANACONDA_HOME)/lib/python3.6/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python3 python3.6m
PYTHON_INCLUDE := /home/wxy/anaconda3/include/python3.6m \
                 /home/wxy/anaconda3/lib/python3.6/site-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial /usr/local/include/opencv4
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial/

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
LIBRARIES += glog gflags protobuf leveldb snappy \
        lmdb boost_system hdf5_hl hdf5 m \
        opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs
 
# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
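With the config in place, the usual build sequence is:

make all -j8
make pycaffe -j8
make test -j8 && make runtest    # optional sanity tests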

Verification
example/verify_deploy.py: the idea is to format the same input for PyTorch and for Caffe respectively, run both, and compare the error. My conversion succeeded, but the outputs still differ.
PyTorch format: [batch_size, 3, h, w]

img = cv2.imread(imgfile)            # read image, (h, w, 3)
img = cv2.resize(img, (224, 224))    # resize
img = img[np.newaxis, :, :, :]       # add batch dim -> (1, h, w, 3)
img = img.transpose(0, 3, 1, 2)      # reorder -> (1, 3, h, w)
# numpy -> torch.tensor
image = torch.from_numpy(img)
image = Variable(image.cuda())
image = image.float()
# load the model and run inference
net = resnet.resnet18()
checkpoint = torch.load(weightfile)
net.load_state_dict(checkpoint)
net.cuda().eval()
blobs = net.forward(image)

Caffe format:

image = caffe.io.load_image(imgfile)
transformer = caffe.io.Transformer({'data': (1, 3, args.height, args.width)})
transformer.set_transpose('data', (2, 0, 1))    # images load as HxWxC, convert to CxHxW
transformer.set_mean('data', np.array([args.meanB, args.meanG, args.meanR]))
transformer.set_raw_scale('data', args.scale)
transformer.set_channel_swap('data', (2, 1, 0)) # channel swap, RGB -> BGR
image = transformer.preprocess('data', image)
image = image.reshape(1, 3, args.height, args.width)
# load the model and run inference
net = caffe.Net(protofile, weightfile, caffe.TEST)
net.blobs['blob1'].reshape(1, 3, args.height, args.width)
net.blobs['blob1'].data[...] = image
output = net.forward()
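To compare, pull the matching output blob from Caffe (the blob name depends on your converted prototxt; 'prob' here is a placeholder) and diff it against the PyTorch tensor:

caffe_out = output['prob']                    # hypothetical output blob name
torch_out = blobs.detach().cpu().numpy()
print(np.abs(caffe_out - torch_out).max())    # max absolute difference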

The final comparison still shows an error — awkward; I haven't solved it yet, so I'll leave it as an open problem for now. (One thing worth checking: the PyTorch path above feeds raw pixels, while the Caffe path applies mean subtraction, scaling, and a channel swap — the two preprocessing pipelines should match before the outputs are compared.)
Caffe model to TRT
TensorRT ships plenty of samples for this.
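For reference, a minimal sketch of the Caffe → TRT step with the TensorRT 5 Python API, modeled on the bundled parser samples; the output blob name 'prob' is a placeholder for whatever your prototxt ends with:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.CaffeParser() as parser:
    builder.max_workspace_size = 1 << 28
    model_tensors = parser.parse(deploy='resnet18.prototxt',
                                 model='resnet18.caffemodel',
                                 network=network, dtype=trt.float32)
    network.mark_output(model_tensors.find('prob'))   # placeholder blob name
    engine = builder.build_cuda_engine(network)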

4. Direct ONNX inference

Covered in another post: https://blog.csdn.net/qq_38109843/article/details/104611245
TensorRT inference mainly supports three parser formats: Caffe, ONNX, and UFF; for PyTorch, ONNX is the most convenient.
