[Object Detection] YOLOv3 + ASFF: Learning Spatial Fusion for Single-Shot Object Detection


Key contribution:

Since the idea of the paper is fairly simple, I went straight to the contribution. The goal is to resolve the inconsistency between the feature scales of different FPN levels, so the authors propose a new way of fusing the FPN outputs:
[Figure: the ASFF architecture — the three FPN levels are resized and adaptively fused into three output feature maps]
The figure shows the details. The FPN first produces feature maps at three scales, level 1 through level 3. The authors then fuse them using ASFF (adaptively spatial feature fusion): the three levels are re-fused into three feature maps, one at each of the corresponding scales, with fusion weights that are adjusted adaptively. Take ASFF-1 as an example: the feature maps of all three levels are first resized to the level-1 resolution, and then a fusion weight is learned for each of them, which lets the network better learn how much each feature scale contributes to the prediction feature map at that level.
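Concretely, for each output level l the paper computes every spatial position (i, j) of the fused map as

    y_ij^l = α_ij^l · x_ij^{1→l} + β_ij^l · x_ij^{2→l} + γ_ij^l · x_ij^{3→l},  with  α_ij^l + β_ij^l + γ_ij^l = 1,

where x^{n→l} denotes the level-n feature map resized to the level-l resolution, and the weight maps α, β, γ are produced by 1×1 convolutions followed by a softmax across the three levels, so they lie in [0, 1] and sum to 1 at every position.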

Code (PyTorch):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFF(nn.Module):
    def __init__(self, level, rfb=False, vis=False):
        super(ASFF, self).__init__()
        self.level = level
        self.dim = [512, 256, 256]
        self.inter_dim = self.dim[self.level]
        # Before fusing, the other two levels must first be brought to this
        # level's resolution and channel count.
        # add_conv (a conv + BN + LeakyReLU helper) is sketched below.
        if level == 0:
            self.stride_level_1 = add_conv(256, self.inter_dim, 3, 2)
            self.stride_level_2 = add_conv(256, self.inter_dim, 3, 2)
            self.expand = add_conv(self.inter_dim, 1024, 3, 1)
        elif level == 1:
            self.compress_level_0 = add_conv(512, self.inter_dim, 1, 1)
            self.stride_level_2 = add_conv(256, self.inter_dim, 3, 2)
            self.expand = add_conv(self.inter_dim, 512, 3, 1)
        elif level == 2:
            self.compress_level_0 = add_conv(512, self.inter_dim, 1, 1)
            self.expand = add_conv(self.inter_dim, 256, 3, 1)
        compress_c = 8 if rfb else 16  # when adding rfb, use half the channels to save memory

        self.weight_level_0 = add_conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_1 = add_conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_2 = add_conv(self.inter_dim, compress_c, 1, 1)

        self.weight_levels = nn.Conv2d(compress_c * 3, 3, kernel_size=1, stride=1, padding=0)
        self.vis = vis

    def forward(self, x_level_0, x_level_1, x_level_2):
        if self.level == 0:
            level_0_resized = x_level_0
            level_1_resized = self.stride_level_1(x_level_1)
            # Level 2 is 4x larger, so downsample twice: max-pool, then strided conv.
            level_2_downsampled_inter = F.max_pool2d(x_level_2, 3, stride=2, padding=1)
            level_2_resized = self.stride_level_2(level_2_downsampled_inter)
        elif self.level == 1:
            level_0_compressed = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(level_0_compressed, scale_factor=2, mode='nearest')
            level_1_resized = x_level_1
            level_2_resized = self.stride_level_2(x_level_2)
        elif self.level == 2:
            level_0_compressed = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(level_0_compressed, scale_factor=4, mode='nearest')
            level_1_resized = F.interpolate(x_level_1, scale_factor=2, mode='nearest')
            level_2_resized = x_level_2

        level_0_weight_v = self.weight_level_0(level_0_resized)
        level_1_weight_v = self.weight_level_1(level_1_resized)
        level_2_weight_v = self.weight_level_2(level_2_resized)
        levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v), 1)
        # Learn one weight map per scale; the softmax makes them sum to 1 at
        # each spatial position.
        levels_weight = self.weight_levels(levels_weight_v)
        levels_weight = F.softmax(levels_weight, dim=1)
        # Adaptively weighted fusion of the three resized feature maps.
        fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + \
                            level_1_resized * levels_weight[:, 1:2, :, :] + \
                            level_2_resized * levels_weight[:, 2:, :, :]

        out = self.expand(fused_out_reduced)

        if self.vis:
            return out, levels_weight, fused_out_reduced.sum(dim=1)
        else:
            return out
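
The snippet depends on add_conv, a helper from the official ASFF repository that is not shown above. A minimal sketch, assuming the conv + batch norm + LeakyReLU composition used there:

def add_conv(in_ch, out_ch, ksize, stride):
    # Builds a Conv-BN-LeakyReLU block; the padding keeps the spatial size
    # unchanged for stride 1 and halves it for stride 2.
    stage = nn.Sequential()
    stage.add_module('conv', nn.Conv2d(in_ch, out_ch, kernel_size=ksize,
                                       stride=stride,
                                       padding=(ksize - 1) // 2, bias=False))
    stage.add_module('batch_norm', nn.BatchNorm2d(out_ch))
    stage.add_module('leaky', nn.LeakyReLU(0.1))
    return stage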

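To sanity-check the shapes, here is a quick smoke test with dummy YOLOv3-style inputs (the 416×416 input size and the resulting 13/26/52 grids are assumptions for illustration; the channel counts follow self.dim and the expand layers above):

# Dummy FPN outputs for a 416x416 input (strides 32, 16, 8).
x_level_0 = torch.randn(1, 512, 13, 13)
x_level_1 = torch.randn(1, 256, 26, 26)
x_level_2 = torch.randn(1, 256, 52, 52)

asff_0 = ASFF(level=0)
out = asff_0(x_level_0, x_level_1, x_level_2)
print(out.shape)  # torch.Size([1, 1024, 13, 13])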