On Model Conversion for Deployment: Paddle, ONNX, TRT, etc. (Part 2)

Table of Contents

  • Foreword
  • 1. paddle2onnx
  • 2. onnx2trt
  • Summary


Foreword

	In the previous article we covered how to convert Paddle 1.x models to ONNX and then to TRT format; many
readers have since asked in private messages how to convert 2.x models. This article therefore focuses on the Paddle 2.x conversion process.

1. paddle2onnx

	First, prepare the trained model and its yml configuration file and place them in the appropriate directory. Then edit line 36 of the
script below (the paddle.onnx.export call) to set the ONNX output name. The script is as follows:
# paddle2onnx.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import sys
 
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
 
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'

import paddle
 
from ppocr.modeling.architectures import build_model
from ppocr.utils.save_load import init_model
import tools.program as program
 
 
def main():
    global_config = config['Global']
 
    # build model
    model = build_model(config['Architecture'])
 
    init_model(config, model, logger)
 
    model.eval()
	
    # define the input spec
    input_spec = paddle.static.InputSpec(shape=[1, 3, 640, 640], dtype='float32', name='data')
 
    # export the ONNX model ("****" is the output name -- set it yourself)
    paddle.onnx.export(model, "****", input_spec=[input_spec], opset_version=10)
 
 
if __name__ == '__main__':
    config, device, logger, vdl_writer = program.preprocess()
    main()
	Once edited, enter the following command in the console:
python <your_dir_name>/paddle2onnx.py  -c <your_yml_path> \
          -o Global.checkpoints=''
	For example: python my_programe/paddle2onnx.py  -c output/yanzhou_ID_detect/config.yml \
	            -o Global.checkpoints='./output/yanzhou_ID_detect/best_accuracy'
	Press Enter. If the corresponding onnx file is produced, this step has succeeded.

2. onnx2trt

	If the Paddle 1.x conversion left you thoroughly frustrated, the Paddle 2.x conversion will come as a striking contrast. Copy the exported onnx
model to the NVIDIA box and edit the following script:
# onnx2trt.py
import tensorrt as trt

def ONNX_build_engine(trt_model_name, onnx_model_name):
    G_LOGGER = trt.Logger()
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)  # TRT 7: explicit-batch network
    with trt.Builder(G_LOGGER) as builder, builder.create_network(explicit_batch) as network, trt.OnnxParser(network, G_LOGGER) as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 30
        print('Loading ONNX file from path {}...'.format(onnx_model_name))
        with open(onnx_model_name, 'rb') as model:
            print('Beginning ONNX file parsing')
            # capture the parse result so we can branch on it below
            parsed = parser.parse(model.read())
            if not parsed:
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
        if parsed:
            print('Completed parsing of ONNX file')
            print('Building an engine from file {}; this may take a while...'.format(onnx_model_name))
 
            ####
            # builder.int8_mode = True
            # builder.int8_calibrator = calib
            builder.fp16_mode = True
            ####
            print("num layers:",network.num_layers)
            last_layer = network.get_layer(network.num_layers - 1)
            # if not last_layer.get_output(0):
            #     network.mark_output(last_layer.get_output(0))  # some models need this; others already mark outputs during the ONNX export, so the line is unnecessary
            network.get_input(0).shape = [1, 3, 640, 640]  # TRT 7: set a static input shape
            engine = builder.build_cuda_engine(network)
            print("engine:",engine)
            print("Completed creating Engine")
            with open(trt_model_name, "wb") as f:
                f.write(engine.serialize())
            return engine
 
        else:
            print('Number of errors: {}'.format(parser.num_errors))
            error = parser.get_error(0) # if it gets more than one error this has to be changed
            del parser
            desc = error.desc()
            line = error.line()
            code = error.code()
            print('Description of the error: {}'.format(desc))
            print('Line where the error occurred: {}'.format(line))
            print('Error code: {}'.format(code))
            print("Model was not parsed successfully")
            exit(0)

ONNX_build_engine('<engine_path>', '<onnx_path>')
	Run the command:
python3 onnx2trt.py
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
engine: None
Completed creating Engine
Traceback (most recent call last):
  File "onnx2trt.py", line 49, in <module>
    ONNX_build_engine('engine','onnx')
  File "onnx2trt.py", line 33, in ONNX_build_engine
    f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
	If the output above appears, the network's output was not located correctly. Uncomment the two lines
beginning with if not last_layer.get_output(0) in the code, and the problem is solved:
Loading ONNX file from path **.onnx...
Beginning ONNX file parsing
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Completed parsing of ONNX file
Building an engine from file **.onnx; this may take a while...
num layers: 368
engine: <tensorrt.tensorrt.ICudaEngine object at 0x7f65cef618>
Completed creating Engine

Summary

	All in all, the most important thing in the conversion process is to pay close attention to the model's structure and its inputs and outputs.
