Deploying a YOLOv5 model with OpenCV's dnn module: pt to pth to onnx

I read the original author's post on doing YOLOv5 object detection with OpenCV's dnn module, and this post works through the method described there.

Converting pt to pth

I spent a long time on this and ran into all sorts of problems; I won't list every pitfall here.
torch.load() and torch.save() are the biggest trap: the full .pt checkpoint pickles the entire Model object, so loading it needs the original yolov5 source on the path, which is presumably why only the state_dict gets saved out as a .pth.
The original author later updated the post with pt-to-pth conversion code; here is the link again: pt2pth
Extraction code: 4y6s

import torch
from collections import OrderedDict
import pickle
import os

device = 'cuda' if torch.cuda.is_available() else 'cpu'

if __name__ == '__main__':
    choices = ['yolov5s', 'yolov5l', 'yolov5m', 'yolov5x']  # replace yolov5s with the name of your own model
    modelfile = choices[0] + '.pt'
    utl_model = torch.load(modelfile, map_location=device)   # load the full YOLOv5 checkpoint (.pt)
    utl_param = utl_model['model'].model                      # the underlying nn.Module
    torch.save(utl_param.state_dict(), os.path.splitext(modelfile)[0] + '.pth')  # save the weights only (.pth)
    own_state = utl_param.state_dict()
    print(len(own_state))

    # additionally dump the weights as numpy arrays into a pickle file
    numpy_param = OrderedDict()
    for name in own_state:
        numpy_param[name] = own_state[name].data.cpu().numpy()
    print(len(numpy_param))
    with open(os.path.splitext(modelfile)[0] + '_numpy_param.pkl', 'wb') as fw:
        pickle.dump(numpy_param, fw)
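As a quick sanity check (not part of the original script), you can load the two files back and look at the parameter names; they follow the official Model's numbered layer names, which matters later when the weights have to be mapped onto the hand-written network definition. The file names below are just the ones produced above for yolov5s.

import pickle
import torch

state = torch.load('yolov5s.pth', map_location='cpu')
print(list(state.keys())[:5])   # numbered layer names from the official Model, not My_YOLO's

with open('yolov5s_numpy_param.pkl', 'rb') as fr:
    numpy_param = pickle.load(fr)
print(len(numpy_param))         # same number of entries as the state_dict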

Converting pth to onnx

You can find the original author's code on his GitHub; I used version 1.0.
Here are the problems I ran into:

  1. First, the official yolov5 version matters:
    I used yolov5 v3.0.
  2. When exporting from pth to onnx you need the matching network definition, which means converting the model's yaml file into a py file with the author's yaml2py script.
    But I hit quite a few problems here: the generated py file could not be used as-is, so you have to add a My_YOLO class yourself and modify the final return, as in the code below.
import torch
import torch.nn as nn
import torch.nn.functional as F
from common import *

class My_YOLO_backbone_head(nn.Module):
    def __init__(self, num_classes=12, anchors=[[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], training=False):
        super().__init__()
        self.seq0_Focus = Focus(3, 32, 3)
        self.seq1_Conv = Conv(32, 64, 3, 2)
        self.seq2_BottleneckCSP = BottleneckCSP(64, 64, 1)
        self.seq3_Conv = Conv(64, 128, 3, 2)
        self.seq4_BottleneckCSP = BottleneckCSP(128, 128, 3)
        self.seq5_Conv = Conv(128, 256, 3, 2)
        self.seq6_BottleneckCSP = BottleneckCSP(256, 256, 3)
        self.seq7_Conv = Conv(256, 512, 3, 2)
        self.seq8_SPP = SPP(512, 512, [5, 9, 13])
        self.seq9_BottleneckCSP = BottleneckCSP(512, 512, 1, False)
        self.seq10_Conv = Conv(512, 256, 1, 1)
        self.seq13_BottleneckCSP = BottleneckCSP(512, 256, 1, False)
        self.seq14_Conv = Conv(256, 128, 1, 1)
        self.seq17_BottleneckCSP = BottleneckCSP(256, 128, 1, False)
        self.seq18_Conv = Conv(128, 128, 3, 2)
        self.seq20_BottleneckCSP = BottleneckCSP(256, 256, 1, False)
        self.seq21_Conv = Conv(256, 256, 3, 2)
        self.seq23_BottleneckCSP = BottleneckCSP(512, 512, 1, False)
    def forward(self, x):
        x = self.seq0_Focus(x)
        x = self.seq1_Conv(x)
        x = self.seq2_BottleneckCSP(x)
        x = self.seq3_Conv(x)
        xRt0 = self.seq4_BottleneckCSP(x)
        x = self.seq5_Conv(xRt0)
        xRt1 = self.seq6_BottleneckCSP(x)
        x = self.seq7_Conv(xRt1)
        x = self.seq8_SPP(x)
        x = self.seq9_BottleneckCSP(x)
        xRt2 = self.seq10_Conv(x)
        route = F.interpolate(xRt2, size=(int(xRt2.shape[2] * 2), int(xRt2.shape[3] * 2)), mode='nearest')
        x = torch.cat([route, xRt1], dim=1)
        x = self.seq13_BottleneckCSP(x)
        xRt3 = self.seq14_Conv(x)
        route = F.interpolate(xRt3, size=(int(xRt3.shape[2] * 2), int(xRt3.shape[3] * 2)), mode='nearest')
        x = torch.cat([route, xRt0], dim=1)
        out0 = self.seq17_BottleneckCSP(x)   # out0
        route = self.seq18_Conv(out0)        # from out0
        x = torch.cat([route, xRt3], dim=1)
        out1 = self.seq20_BottleneckCSP(x)   # out1
        route = self.seq21_Conv(out1)        # from out1
        x = torch.cat([route, xRt2], dim=1)
        out2 = self.seq23_BottleneckCSP(x)   # out2
        return out0, out1, out2
# modify the final return following the format above

class My_YOLO(nn.Module):  # you need to add this class yourself
    def __init__(self, num_classes, anchors=(), training=False):
        super().__init__()
        self.backbone_head = My_YOLO_backbone_head()
        self.yolo_layers = Yolo_Layers(nc=num_classes, anchors=anchors, ch=(128,256,512),training=training)
    def forward(self, x):
        out0, out1, out2 = self.backbone_head(x)
        output = self.yolo_layers([out0, out1, out2])
        return output

Next, when running convert_onnx you need to change the class name in it to the name of your own model.
That is, replace yolov5s with the name of the py file containing your own model definition; this is also why the My_YOLO class above is needed.
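For reference, here is a minimal sketch of what this export step roughly looks like. The module name yolov5s, the num_classes/anchors values, the 640x640 input size and the file names are just the examples used above, not the exact contents of the author's convert_onnx script, which handles the weight mapping properly.

import torch
from collections import OrderedDict
from yolov5s import My_YOLO   # the py file generated from the yaml, with My_YOLO added

# hypothetical values: use your own class count and the anchors from your yaml
net = My_YOLO(num_classes=12,
              anchors=[[10, 13, 16, 30, 33, 23],
                       [30, 61, 62, 45, 59, 119],
                       [116, 90, 156, 198, 373, 326]])

# the key names in the .pth follow the official Model, not My_YOLO's module names,
# so remap them by position (this assumes both definitions list the layers in the
# same order, which is how the generated py file is laid out)
ckpt = torch.load('yolov5s.pth', map_location='cpu')
net.load_state_dict(OrderedDict(zip(net.state_dict().keys(), ckpt.values())))
net.eval()

dummy = torch.zeros(1, 3, 640, 640)   # assumed input resolution
torch.onnx.export(net, dummy, 'yolov5s.onnx', input_names=['data'], opset_version=11)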

A note on the final result

The model exports successfully and OpenCV's dnn module can read it, but after the two conversions the accuracy drops a lot; I am not sure whether the cause is on my side or elsewhere, and the detection results are poor.
If anyone has ideas, I would be glad to discuss.
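For completeness, reading the exported model back with OpenCV's dnn module looks roughly like this; the file name and the 640x640 size are just the values assumed above, and the anchor decoding plus NMS from the original post are still needed afterwards.

import cv2

net = cv2.dnn.readNetFromONNX('yolov5s.onnx')

img = cv2.imread('test.jpg')
# plain resize to the export size; proper letterboxing is omitted here
blob = cv2.dnn.blobFromImage(img, scalefactor=1 / 255.0, size=(640, 640), swapRB=True)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())
print([o.shape for o in outs])   # raw YOLO heads; decode + NMS as in the original post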

Next post: as in the original article, modify the Focus class, define the 1x1 convolutions from the Detect class just outside and before it, drop the Detect class itself, assemble the new model, and export to onnx directly with the official export.py.

Reference: https://blog.csdn.net/nihate/article/details/112731327#comments_14884604
