Notes on Training a YOLOv5 PyTorch Model

PyTorch environment for YOLOv5

pip3 install --user torch==1.7.0 torchvision==0.8.1 seaborn==0.11.0 tqdm==4.56.0

Converting the YOLOv5 PyTorch model to an ONNX model:

References:

  • ONNX, TorchScript and CoreML Model Export
  • CoreML export failure: unexpected number of inputs for node x.2 (_convolution):

Downgrade PyTorch from 1.7.0 to 1.6.0 for the export:

pip install torch==1.6.0 torchvision==0.7.0

Generate the ONNX model

python models/export.py --weights mydata/models/best-m-20210121.pt --img 640 --batch 1

ONNX versions

onnx==1.8.0
onnxruntime==1.4.0
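
With these versions installed, the exported file can be sanity-checked before deployment. A minimal sketch, assuming the .onnx path produced by the export command above:

import onnx

# Load the exported graph and run ONNX's structural checker.
model = onnx.load("mydata/models/best-m-20210121.onnx")
onnx.checker.check_model(model)
print("inputs:", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])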

ONNX deployment reference: https://docs.microsoft.com/zh-cn/azure/machine-learning/concept-onnx
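
Before wiring the model into a service, a minimal onnxruntime sketch can confirm that it loads and show what it returns (the .onnx path is assumed to match the export command above; the number and shapes of outputs depend on the export settings described next):

import numpy as np
import onnxruntime as ort

# Run the exported model once on a dummy 640x640 input and print the output shapes.
session = ort.InferenceSession("mydata/models/best-m-20210121.onnx")
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = session.run(None, {input_name: dummy})
for out in outputs:
    print(out.shape)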

Modify models/export.py and set export to False, so that the exported model produces 4 outputs.

model.model[-1].export = False

For reference, the four outputs:

  • The first output is torch.cat(z, 1), which is the model's actual inference result; the other three are the per-scale feature maps in x. A sketch of parsing this output outside PyTorch follows the code below.
    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        self.training |= self.export
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            if not self.training:  # inference
                if self.grid[i].shape[2:4] != x[i].shape[2:4]:
                    self.grid[i] = self._make_grid(nx, ny).to(x[i].device)

                y = x[i].sigmoid()
                y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
                y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
                z.append(y.view(bs, -1, self.no))

        return x if self.training else (torch.cat(z, 1), x)
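
A minimal sketch of filtering that first output outside PyTorch, e.g. on the numpy array returned by onnxruntime. This helper is my own illustration and assumes the standard YOLOv5 layout of [x, y, w, h, objectness, class scores...] along the last axis:

import numpy as np

def parse_predictions(pred, conf_thres=0.25):
    # pred: (1, num_boxes, 5 + num_classes); x, y, w, h are in pixels of the letterboxed input.
    pred = pred[0]                                  # drop the batch dimension
    obj = pred[:, 4]                                # objectness score
    cls_scores = pred[:, 5:]
    cls_id = cls_scores.argmax(axis=1)
    conf = obj * cls_scores[np.arange(len(pred)), cls_id]
    keep = conf > conf_thres
    # Boxes are still in centre x, y, width, height form; NMS must be applied afterwards.
    return pred[keep, :4], conf[keep], cls_id[keep]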

Modify the letterbox() call: set auto to False and scaleFill to True, so the image is stretched to the fixed 640x640 input rather than padded to a 32-pixel-multiple rectangle (the exported ONNX model expects a fixed input size).

import cv2
import numpy as np


def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
    # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232
    shape = img.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better test mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 32), np.mod(dh, 32)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (new_shape[1], new_shape[0])
        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return img, ratio, (dw, dh)
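
A minimal usage sketch of the preprocessing that goes with this letterbox() setting, producing the (1, 3, 640, 640) float tensor expected by the exported model (the image path is a placeholder):

import cv2
import numpy as np

img0 = cv2.imread("test.jpg")                               # placeholder input image
img, ratio, (dw, dh) = letterbox(img0, new_shape=(640, 640), auto=False, scaleFill=True)
img = img[:, :, ::-1].transpose(2, 0, 1)                    # BGR -> RGB, HWC -> CHW
img = np.ascontiguousarray(img, dtype=np.float32) / 255.0   # scale to [0, 1]
img = img[None]                                             # add batch dimension
# outputs = session.run(None, {input_name: img})            # feed the onnxruntime session shown earlier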

Training the YOLOv5 PyTorch model: https://github.com/ultralytics/yolov5/

nohup python3 -u train.py --img-size 640 --batch-size 64 --epochs 100000 --data ./data/oral_counting.yaml --cfg ./models/yolov5x.my.yaml --weights ./mydata/models/yolov5x.pt > nohup.out &
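
The --data argument points to a standard YOLOv5 dataset yaml. An illustrative sketch of what data/oral_counting.yaml could contain (paths, class count and names below are placeholders, not the real dataset):

# data/oral_counting.yaml (placeholders)
train: ../datasets/oral_counting/images/train
val: ../datasets/oral_counting/images/val
nc: 1
names: ['object']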

Validation script:

python3 val.py --img-size 640 --batch-size 64 --data ./data/oral_counting.yaml --weights ./mydata/models/best_20210831.pt

yolov5m version:

nohup python3 -u train.py --img-size 640 --batch-size 32 --epochs 300 --data ./data/ps.yaml --cfg ./models/yolov5m.my.yaml --weights ./mydata/models/yolov5m.pt > nohup.20200120.out &

yolov5m version, dataset v3:

nohup python3 -u train.py --img-size 1024 --batch-size 32 --epochs 300 --data ./data/ps.v3.yaml --cfg ./models/yolov5m.my.yaml --weights ./mydata/models/yolov5m.pt > nohup.20200121.v3.out &

yolov5s version:

nohup python3 -u train.py --img-size 640 --batch-size 40 --epochs 300 --data ./data/ps.yaml --cfg ./models/yolov5s.my.yaml --weights ./mydata/models/yolov5s.pt > nohup.20200120-s.out &

Dependencies

torch==1.7.0
torchvision==0.8.1
seaborn==0.11.0
tqdm==4.56.0
pip3 install --user torch==1.7.0 torchvision==0.8.1 seaborn==0.11.0 tqdm==4.56.0

Training a YOLOv4 model with TensorFlow

This attempt failed.

GitHub: https://github.com/hunglc007/tensorflow-yolov4-tflite

TensorFlow version

pip list | grep 'tensorflow'

tensorflow                2.1.0
tensorflow-estimator      2.1.0

Data format (see the data/dataset/val2014.txt file):

image path or URL (xxx.jpg) x_min,y_min,x_max,y_max,label x_min,y_min,x_max,y_max,label
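
If the labels are already in YOLOv5's per-image .txt format ("class cx cy w h", normalized), they have to be flattened into this one-line-per-image layout. A minimal conversion sketch (my own helper, not part of either repo; directory layout and file names are placeholders):

import glob
import os

import cv2

def convert_to_yolov4_annotations(images_dir, labels_dir, out_file):
    # Write one line per image: "<image path> x_min,y_min,x_max,y_max,label ...".
    with open(out_file, "w") as out:
        for img_path in sorted(glob.glob(os.path.join(images_dir, "*.jpg"))):
            h, w = cv2.imread(img_path).shape[:2]
            stem = os.path.splitext(os.path.basename(img_path))[0]
            label_path = os.path.join(labels_dir, stem + ".txt")
            boxes = []
            if os.path.exists(label_path):
                with open(label_path) as f:
                    for line in f:
                        cls, cx, cy, bw, bh = line.split()
                        cx, cy = float(cx) * w, float(cy) * h          # de-normalize the box centre
                        bw, bh = float(bw) * w, float(bh) * h          # de-normalize the box size
                        x_min, y_min = int(cx - bw / 2), int(cy - bh / 2)
                        x_max, y_max = int(cx + bw / 2), int(cy + bh / 2)
                        boxes.append("%d,%d,%d,%d,%s" % (x_min, y_min, x_max, y_max, cls))
            out.write(img_path + " " + " ".join(boxes) + "\n")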

Modify the config file core/config.py: point the training and test annotation files, as well as the class-names file, at your own data.
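
For orientation, the fields to change in core/config.py look roughly like the following. This is an illustrative sketch; the field names are quoted from memory of that repo and may differ, so verify them against your checkout:

# core/config.py -- illustrative only, confirm the exact field names in the repo
__C.YOLO.CLASSES     = "./data/classes/my_classes.names"   # class-names file
__C.TRAIN.ANNOT_PATH = "./data/dataset/my_train.txt"       # training annotation file
__C.TEST.ANNOT_PATH  = "./data/dataset/my_test.txt"        # test annotation file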
