Object Detection YOLOv5 - Data Augmentation at Inference Time

flyfish

Version: YOLOv5 6.2

Reference:

https://github.com/ultralytics/yolov5/issues/303

Data augmentation is commonly used during training, but it can also be applied at inference time. Augmenting at test time goes by the name Test-Time Augmentation (TTA).

In practice, TTA runs the image at three resolutions (large, medium, small), and the medium-resolution image is additionally flipped left-right. With the example image at a base size of 640:

Large resolution: 480 × 640 (width W × height H), scale 1.0

Medium resolution: 416 × 544 (width W × height H), scale 0.83

Small resolution: 352 × 448 (width W × height H), scale 0.67
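
These three sizes follow from the base 640 × 480 (H × W) input, the scale factors s = [1, 0.83, 0.67], and padding each scaled dimension up to the next multiple of the stride 32. A quick sketch to verify (padded_size is a hypothetical helper, not YOLOv5 code):

import math

def padded_size(h, w, ratio, gs=32):
    # Scale each dimension, then round up to the next multiple of gs,
    # mirroring the padding that YOLOv5's scale_img applies during TTA
    return math.ceil(h * ratio / gs) * gs, math.ceil(w * ratio / gs) * gs

for ratio in (1.0, 0.83, 0.67):
    print(ratio, padded_size(640, 480, ratio))  # (H, W)
# 1.0  (640, 480)
# 0.83 (544, 416)
# 0.67 (448, 352)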

Command

python detect.py --weights ./yolov5s.pt --source ./data/images/bus.jpg --imgsz 640 --augment
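
The TTA tutorial referenced above also shows how to enable this from Python via PyTorch Hub; passing augment=True to the model call plays the same role as --augment:

import torch

# Load YOLOv5s via PyTorch Hub and run augmented (TTA) inference
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
results = model('./data/images/bus.jpg', augment=True)
results.print()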

The --augment flag

--augment is defined as an argparse store_true flag, so augmentation is off by default at inference. The minimal example below shows how such a flag behaves:

import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", help="increase output verbosity",
                    action="store_true")
args = parser.parse_args()
if args.verbose:
    print("verbosity turned on")
else:
    print("verbosity turned off")

Suppose the code above is saved as test.py:

# python test.py
# output: verbosity turned off

# python test.py -v
# output: verbosity turned on
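
detect.py declares --augment the same way, as a store_true flag:

parser.add_argument('--augment', action='store_true', help='augmented inference')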

check_img_size verifies that the image size is a multiple of the stride s in each dimension (a multiple of 32 by default). For example, an image size of 1111 gives:

--img-size [1111, 1111] updated to [1120, 1120]

def check_img_size(imgsz, s=32, floor=0):
    # Verify image size is a multiple of stride s in each dimension
    if isinstance(imgsz, int):  # integer i.e. img_size=640
        new_size = max(make_divisible(imgsz, int(s)), floor)
    else:  # list i.e. img_size=[640, 480]
        imgsz = list(imgsz)  # convert to list if tuple
        new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz]
    if new_size != imgsz:
        LOGGER.warning(f'WARNING: --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}')
    return new_size
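
make_divisible, used above, just rounds up to the nearest multiple of the divisor. A minimal sketch of its core logic (the YOLOv5 version additionally accepts a tensor divisor and converts it to int):

import math

def make_divisible(x, divisor):
    # Returns nearest x divisible by divisor, rounding up
    return math.ceil(x / divisor) * divisor

print(make_divisible(1111, 32))  # 1120 (35 * 32)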

The inference-time augmentation itself is _forward_augment in models/yolo.py (the print calls below were added for this write-up):

def _forward_augment(self, x):
    img_size = x.shape[-2:]  # height, width
    s = [1, 0.83, 0.67]  # scales
    f = [None, 3, None]  # flips (2-ud, 3-lr)
    y = []  # outputs
    for si, fi in zip(s, f):
        xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
        print("xi.shape[2:]:",xi.shape[2:])
        yi = self._forward_once(xi)[0]  # forward
        print("0 yi:",yi.shape)
        #cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1])  # save
        yi = self._descale_pred(yi, fi, si, img_size)
        print("1 yi.shape:",yi.shape)
        y.append(yi)
    y = self._clip_augmented(y)  # clip augmented tails
    return torch.cat(y, 1), None  # augmented inference, train
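
scale_img (in utils/torch_utils.py) performs the resize and padding: it bilinearly interpolates the batch to the scaled size, then pads it up to a gs-multiple. A lightly simplified sketch of its logic:

import math
import torch.nn.functional as F

def scale_img(img, ratio=1.0, same_shape=False, gs=32):
    # Scale img (bs, 3, h, w) by ratio, padded to a multiple of gs
    if ratio == 1.0:
        return img
    h, w = img.shape[2:]
    s = (int(h * ratio), int(w * ratio))  # new size
    img = F.interpolate(img, size=s, mode='bilinear', align_corners=False)  # resize
    if not same_shape:  # pad height/width up to gs-multiples
        h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w))
    return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447)  # pad value ≈ ImageNet mean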

def _descale_pred(self, p, flips, scale, img_size):
    # de-scale predictions following augmented inference (inverse operation)
    if self.inplace:
        p[..., :4] /= scale  # de-scale
        if flips == 2:
            p[..., 1] = img_size[0] - p[..., 1]  # de-flip ud
        elif flips == 3:
            p[..., 0] = img_size[1] - p[..., 0]  # de-flip lr
    else:
        x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale  # de-scale
        if flips == 2:
            y = img_size[0] - y  # de-flip ud
        elif flips == 3:
            x = img_size[1] - x  # de-flip lr
        p = torch.cat((x, y, wh, p[..., 4:]), -1)
    return p
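
A hypothetical single-box example of the de-scaling for the medium branch (scale 0.83, left-right flip) on a 640 × 480 input:

import torch

img_size = (640, 480)  # H, W of the unaugmented input
p = torch.tensor([[100.0, 200.0, 50.0, 80.0]])  # x, y, w, h on the augmented image
scale, flips = 0.83, 3  # lr flip

p /= scale                           # undo the 0.83 downscale
p[..., 0] = img_size[1] - p[..., 0]  # undo the lr flip: x' = W - x
print(p)  # ≈ tensor([[359.5181, 240.9639, 60.2410, 96.3855]])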

def _clip_augmented(self, y):
    # Clip YOLOv5 augmented inference tails
    nl = self.model[-1].nl  # number of detection layers (P3-P5)
    g = sum(4 ** x for x in range(nl))  # grid points
    e = 1  # exclude layer count
    i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e))  # indices
    y[0] = y[0][:, :-i]  # large
    i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e))  # indices
    y[-1] = y[-1][:, i:]  # small
    return y

For the flip handling, look at this part of _descale_pred:

if self.inplace:
    p[..., :4] /= scale  # de-scale
    if flips == 2:
        p[..., 1] = img_size[0] - p[..., 1]  # de-flip ud
    elif flips == 3:
        p[..., 0] = img_size[1] - p[..., 0]  # de-flip lr

flips == 2 means an up-down flip
flips == 3 means a left-right flip
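
The flip codes are simply the dimensions of the (batch, channel, H, W) input tensor: flipping dim 2 flips along H (up-down), dim 3 along W (left-right). A tiny demo:

import torch

x = torch.arange(6).reshape(1, 1, 2, 3)  # (batch, channel, H, W)
print(x.flip(2))  # up-down:    [[3, 4, 5], [0, 1, 2]]
print(x.flip(3))  # left-right: [[2, 1, 0], [5, 4, 3]]
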
s = [1, 0.83, 0.67] are the scale ratios; scale_img pads each scaled size so every dimension stays divisible by 32.

The order here is H, W:

xi.shape[2:]: torch.Size([640, 480])
xi.shape[2:]: torch.Size([544, 416])
xi.shape[2:]: torch.Size([448, 352])

yi.shape: torch.Size([1, 18900, 85])
yi.shape: torch.Size([1, 13923, 85])
yi.shape: torch.Size([1, 9702, 85])

After the redundant tails are clipped and the three outputs are concatenated, the combined predictions go into NMS:

torch.Size([1, 34233, 85])
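
The 34233 can be reproduced from the three yi shapes and the _clip_augmented logic (nl = 3 detection layers, e = 1 excluded layer):

g = sum(4 ** x for x in range(3))    # 21 relative grid points across P3-P5
large = 18900 - (18900 // g) * 1     # first output minus its large-object tail -> 18000
small = 9702 - (9702 // g) * 4 ** 2  # last output minus its small-object head -> 2310
print(large + 13923 + small)         # 34233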

So where plain inference processes one image, augmented inference processes three.
