Object Detection Algorithms: Boosting mAP | Improving YOLOv5 with Alpha-IoU

Paper: "Alpha-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression"

Paper link: https://arxiv.org/abs/2110.13675v2

1. Paper Overview:

In the paper, the authors generalize existing IoU-based losses into a new family of power IoU losses, consisting of a power IoU term and an additional power regularization term governed by a single power parameter α. They call this new loss family the α-IoU losses. Experiments on multiple object detection benchmarks and models show that the α-IoU losses:

  • significantly outperform existing IoU-based losses;

  • give the detector greater flexibility in achieving different levels of bbox regression accuracy by tuning α;

  • are more robust to small datasets and to label noise.

The experiments show that α > 1 increases the loss and gradient of high-IoU objects, which in turn improves bbox regression accuracy.
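
To see why, take the simplest member of the family, the α-IoU loss itself:

$$\mathcal{L}_{\alpha\text{-}IoU} = 1 - IoU^{\alpha}, \qquad \left|\frac{\partial \mathcal{L}_{\alpha\text{-}IoU}}{\partial IoU}\right| = \alpha \, IoU^{\alpha-1}.$$

For α > 1 the gradient magnitude grows with IoU, so nearly correct (high-IoU) boxes receive larger gradients and are regressed toward even tighter fits.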

The power parameter α acts as a hyperparameter for tuning the α-IoU loss to different target levels of bbox regression accuracy: α > 1 yields higher regression accuracy (i.e., passes higher IoU thresholds) by putting more focus on high-IoU objects.

α is not overly sensitive to the model or dataset, and α = 3 performs consistently well in most cases. The α-IoU loss family can easily be applied to improve detectors on both clean and noisy data, introducing no extra parameters and adding no training/inference time.
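
For reference, the main members of the family as defined in the paper (setting α = 1 recovers the standard IoU/GIoU/DIoU/CIoU losses):

$$\begin{aligned}
\mathcal{L}_{\alpha\text{-}IoU} &= 1 - IoU^{\alpha},\\
\mathcal{L}_{\alpha\text{-}GIoU} &= 1 - IoU^{\alpha} + \left(\frac{|C \setminus (B \cup B^{gt})|}{|C|}\right)^{\alpha},\\
\mathcal{L}_{\alpha\text{-}DIoU} &= 1 - IoU^{\alpha} + \frac{\rho^{2\alpha}(\mathbf{b}, \mathbf{b}^{gt})}{c^{2\alpha}},\\
\mathcal{L}_{\alpha\text{-}CIoU} &= 1 - IoU^{\alpha} + \frac{\rho^{2\alpha}(\mathbf{b}, \mathbf{b}^{gt})}{c^{2\alpha}} + (\beta v)^{\alpha},
\end{aligned}$$

where B and B^gt are the predicted and ground-truth boxes, C is their smallest enclosing box, ρ is the distance between box centers, c is the diagonal length of C, and v and β are the aspect-ratio consistency term and its trade-off weight from CIoU.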

2. Corresponding Code:

import math

import torch


def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, EIoU=False, alpha=3, eps=1e-9):
    # Returns the alpha-IoU of box1 to box2. box1 is 4, box2 is nx4
    box2 = box2.T
 
    # Get the coordinates of bounding boxes
    if x1y1x2y2:  # x1, y1, x2, y2 = box1
        b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
        b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
    else:  # transform from xywh to xyxy
        b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
        b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
        b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
        b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
 
    # Intersection area
    inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
            (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
 
    # Union Area
    w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
    w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
    union = w1 * h1 + w2 * h2 - inter + eps
 
    # change iou into pow(iou + eps, alpha): apply the alpha power term
    iou = torch.pow(inter / union + eps, alpha)
    beta = 2 * alpha  # squared penalty terms (rho^2, c^2, ...) are raised via beta = 2 * alpha
    if GIoU or DIoU or CIoU or EIoU:
        # width and height of the smallest enclosing box around the two boxes
        cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1)  # convex (smallest enclosing box) width
        ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1)  # convex height
 
        if CIoU or DIoU or EIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
            # squared diagonal length of the smallest enclosing box (raised via beta)
            c2 = cw ** beta + ch ** beta + eps  # convex diagonal
            rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2)
            rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2)
            # squared distance between the two box centers (raised via beta)
            rho2 = (rho_x ** beta + rho_y ** beta) / (2 ** beta)  # center distance
            if DIoU:
                return iou - rho2 / c2  # DIoU
 
            elif CIoU:  # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
                v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
                with torch.no_grad():
                    alpha_ciou = v / ((1 + eps) - inter / union + v)
                # return iou - (rho2 / c2 + v * alpha_ciou)  # CIoU
                return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha))  # CIoU
 
            # EIoU builds on CIoU: the aspect-ratio penalty is split into
            # separate width/height differences relative to the enclosing box,
            # which speeds up convergence and improves regression accuracy
            elif EIoU:
                rho_w2 = ((b2_x2 - b2_x1) - (b1_x2 - b1_x1)) ** beta
                rho_h2 = ((b2_y2 - b2_y1) - (b1_y2 - b1_y1)) ** beta
                cw2 = cw ** beta + eps
                ch2 = ch ** beta + eps
                return iou - (rho2 / c2 + rho_w2 / cw2 + rho_h2 / ch2)
 
        # GIoU https://arxiv.org/pdf/1902.09630.pdf
        # (reached only when GIoU=True and DIoU/CIoU/EIoU are all False)
        c_area = torch.max(cw * ch + eps, union)  # convex area
        return iou - torch.pow((c_area - union) / c_area + eps, alpha)  # GIoU
    else:
        return iou  # torch.log(iou+eps) or iou
 

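As a quick sanity check of the function above, the sketch below uses hypothetical boxes in xywh format (box1 is a single box, box2 is n×4; with CIoU=True the result is IoU^α minus a penalty, so it can be negative for non-overlapping pairs):

import torch

box1 = torch.tensor([50.0, 50.0, 40.0, 40.0])     # one predicted box, xywh
box2 = torch.tensor([[55.0, 55.0, 40.0, 40.0],    # n x 4 target boxes, xywh
                     [10.0, 10.0, 20.0, 20.0]])
iou = bbox_alpha_iou(box1, box2, x1y1x2y2=False, CIoU=True, alpha=3)
print(iou)  # shape (2,): positive for the overlapping pair, negative for the disjoint pair
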
Finally, in utils/loss.py, replace the call iou = bbox_iou(...) with iou = bbox_alpha_iou(...).
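
In v5/v6-era YOLOv5 code this call sits in ComputeLoss.__call__; the exact line varies between releases, so the snippet below is only a sketch (alpha=3 follows the paper's recommendation, and bbox_alpha_iou is assumed to be defined in, or imported into, the same module):

# utils/loss.py, inside ComputeLoss.__call__ (exact line varies by release)
# before:
#   iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True)  # iou(prediction, target)
# after:
iou = bbox_alpha_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True, alpha=3)  # alpha-IoU
lbox += (1.0 - iou).mean()  # the box-loss term itself stays unchanged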

For other YOLOv5 improvement methods, you can follow the author's CSDN blog and reach out via private message.
