Converting a PyTorch model to ONNX and then to ncnn

I. pth to ONNX
After training has produced a .pth file, we convert it to ONNX by following example code commonly found online. The online code is as follows:

import torch
import torchvision

# define a resnet18 model
model = torchvision.models.resnet18(pretrained=True)
# define input shape
x = torch.rand(1, 3, 224, 224)
# define input and output nodes, can be customized
input_names = ["x"]
output_names = ["y"]
# convert pytorch to onnx
torch_out = torch.onnx.export(model, x, "resnet18.onnx", input_names=input_names, output_names=output_names)

The main difficulty is getting hold of the model variable, so we insert the snippet above into our model test script at the place where model is created. The following is the code in the test script that creates model:

    if os.path.exists(save_path):
        shutil.rmtree(save_path, ignore_errors=True)
    if not os.path.exists(save_path):
        os.makedirs(save_path)
    save_img_folder = os.path.join(save_path, 'img')
    if not os.path.exists(save_img_folder):
        os.makedirs(save_img_folder)
    save_txt_folder = os.path.join(save_path, 'result')
    if not os.path.exists(save_txt_folder):
        os.makedirs(save_txt_folder)
    img_paths = [os.path.join(path, x) for x in os.listdir(path)]
    net = PSENet(backbone=backbone, pretrained=False, result_num=config.n)
    model = Pytorch_model(model_path, net=net, scale=scale, gpu_id=gpu_id)
    total_frame = 0.0
    total_time = 0.0
    for img_path in tqdm(img_paths):
        img_name = os.path.basename(img_path).split('.')[0]
        save_name = os.path.join(save_txt_folder, 'res_' + img_name + '.txt')
        _, boxes_list, t = model.predict(img_path)
        total_frame += 1
        total_time += t
        img = draw_bbox(img_path, boxes_list, color=(0, 0, 255))
        cv2.imwrite(os.path.join(save_img_folder, '{}.jpg'.format(img_name)), img)
        np.savetxt(save_name, boxes_list.reshape(-1, 8), delimiter=',', fmt='%d')
    print('fps:{}'.format(total_frame / total_time))
    return save_txt_folder

We insert our code right after the model line and comment that line out, because the model created there (a Pytorch_model wrapper) cannot be used for export directly; the newly inserted code is shown below:

    if os.path.exists(save_path):
        shutil.rmtree(save_path, ignore_errors=True)
    if not os.path.exists(save_path):
        os.makedirs(save_path)
    save_img_folder = os.path.join(save_path, 'img')
    if not os.path.exists(save_img_folder):
        os.makedirs(save_img_folder)
    save_txt_folder = os.path.join(save_path, 'result')
    if not os.path.exists(save_txt_folder):
        os.makedirs(save_txt_folder)
    img_paths = [os.path.join(path, x) for x in os.listdir(path)]
    net = PSENet(backbone=backbone, pretrained=False, result_num=config.n)
    #model = Pytorch_model(model_path, net=net, scale=scale, gpu_id=gpu_id)
    ################## inserted code ########################################
    net1 = load_model(net, './output/Best_926_r0.424205_p0.406323_f10.415072.pth')
    # define input shape; replace this dummy tensor with your own preprocessing
    x = torch.rand(1, 3, 224, 224)
    # define input and output nodes, can be customized
    input_names = ["x"]
    output_names = ["y"]
    # convert pytorch to onnx
    torch_out = torch.onnx.export(net1, x, "resnet18.onnx", input_names=input_names, output_names=output_names)
    exit()
    ##########################################################################
    total_frame = 0.0
    total_time = 0.0

The code of the load_model function is as follows:

def load_model(model, model_path):
    checkpoint = torch.load(model_path, map_location=lambda storage, loc: storage)
    # print('loaded {}, epoch {}'.format(model_path, checkpoint['epoch']))
    # the checkpoint may wrap the weights in a 'state_dict' key or be a raw state dict
    try:
        state_dict_ = checkpoint["state_dict"]
    except KeyError:
        state_dict_ = checkpoint
    state_dict = {}
    # convert data_parallel weights to plain model weights by stripping the 'module.' prefix
    for k in state_dict_:
        if k.startswith('module') and not k.startswith('module_list'):
            state_dict[k[7:]] = state_dict_[k]
        else:
            state_dict[k] = state_dict_[k]
    model_state_dict = model.state_dict()

    # check loaded parameters against the created model parameters
    msg = 'If you see this, your model does not fully load the ' + \
          'pre-trained weight. Please make sure ' + \
          'you have correctly specified --arch xxx ' + \
          'or set the correct --num_classes for your own dataset.'
    for k in state_dict:
        if k in model_state_dict:
            if state_dict[k].shape != model_state_dict[k].shape:
                print('Skip loading parameter {}, required shape {}, '
                      'loaded shape {}. {}'.format(
                          k, model_state_dict[k].shape, state_dict[k].shape, msg))
                state_dict[k] = model_state_dict[k]
        else:
            print('Drop parameter {}. '.format(k) + msg)
    for k in model_state_dict:
        if k not in state_dict:
            print('No param {}. '.format(k) + msg)
            state_dict[k] = model_state_dict[k]
    model.load_state_dict(state_dict, strict=False)
    return model

Running this generates the ONNX file, but a warning is printed:

/home/admin1/anaconda3/lib/python3.7/site-packages/torch/onnx/symbolic_helper.py:243: UserWarning: You are trying to export the model with onnx:Upsample for ONNX opset version 9. This operator might cause results to not match the expected results by PyTorch.
ONNX's Upsample/Resize operator did not match Pytorch's Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch's behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator. 
  "" + str(_export_onnx_opset_version) + ". "

We add opset_version=11 to the arguments of torch.onnx.export:

torch_out = torch.onnx.export(net1, x, "resnet18.onnx", input_names=input_names, output_names=output_names, opset_version=11)

and the warning goes away.
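At this point it is worth a quick sanity check that the exported graph produces the same result as the PyTorch network. The snippet below is only a sketch, not part of the original workflow; it assumes onnxruntime is installed (pip install onnxruntime) and reuses the net1 and x variables from the inserted export code, together with the output name "y" chosen above.

import numpy as np
import onnxruntime as ort
import torch

# run the PyTorch network on the same dummy input used for the export
net1.eval()
with torch.no_grad():
    torch_out = net1(x)
# some networks return a tuple/list; compare the first tensor in that case
ref = torch_out[0] if isinstance(torch_out, (tuple, list)) else torch_out

# run the exported ONNX graph with onnxruntime on CPU
sess = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(["y"], {"x": x.numpy()})[0]

# the two outputs should agree to within floating-point tolerance
np.testing.assert_allclose(ref.numpy(), onnx_out, rtol=1e-3, atol=1e-5)
print("PyTorch and ONNX outputs match")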

The exported ONNX model still contains many redundant operators that ncnn does not support, so we strip them with onnx-simplifier (pip install onnx-simplifier):

python -m onnxsim resnet18.onnx resnet18-sim.onnx
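As an optional check (not part of the original steps), you can compare the number of nodes before and after simplification and re-validate the simplified file with the ONNX checker; this only needs the onnx package that onnx-simplifier already depends on.

import onnx

# compare graph size before and after simplification
orig = onnx.load("resnet18.onnx")
simplified = onnx.load("resnet18-sim.onnx")
print("nodes before:", len(orig.graph.node))
print("nodes after :", len(simplified.graph.node))

# make sure the simplified model is still a valid ONNX graph
onnx.checker.check_model(simplified)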

II. ONNX to ncnn
1. Build ncnn
Reference:

https://blog.csdn.net/xiao13mm/article/details/106165477

2. Convert to ncnn
After building ncnn, an executable named onnx2ncnn can be found under build/tools/onnx:

./onnx2ncnn resnet18-sim.onnx resnet18.param resnet18.bin

The resulting .param and .bin files are exactly the model files that ncnn needs.
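To verify that the converted files actually load, here is a minimal sketch using the pip-installable ncnn Python bindings (pip install ncnn). This is an illustration rather than part of the original post: the image path test.jpg is a placeholder, the real preprocessing (resize, mean subtraction, normalization) must match what the network was trained with, and the blob names "x" and "y" are the input/output names chosen during the ONNX export.

import cv2
import numpy as np
import ncnn

# load the converted model
net = ncnn.Net()
net.load_param("resnet18.param")
net.load_model("resnet18.bin")

# read an image and convert it to an ncnn Mat resized to the exported
# 1x3x224x224 input shape; add your own mean/normalization here
img = cv2.imread("test.jpg")
mat_in = ncnn.Mat.from_pixels_resize(img, ncnn.Mat.PixelType.PIXEL_BGR,
                                     img.shape[1], img.shape[0], 224, 224)

# run the network: feed the input blob "x" and extract the output blob "y"
ex = net.create_extractor()
ex.input("x", mat_in)
ret, mat_out = ex.extract("y")
out = np.array(mat_out)
print(out.shape)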

QQ discussion group: 1080729300
