Code Reading: Deformable DETR (Part 1)

Since I'm planning to make some changes relative to the transformer and see how they perform, over the next few days I'll start by going through the Deformable DETR code.

Let's start with the contents of the models directory:


The models directory

The file position_encoding.py

"""
Various positional encodings for the transformer.
"""
import math
import torch
from torch import nn
from util.misc import NestedTensor

Among the imports, only NestedTensor needs explanation. It is essentially a wrapper around a collection of tensors (here, the padded batch of images and its padding mask) so that they can be transformed together. It is defined as follows:

from typing import Optional
from torch import Tensor

class NestedTensor(object):
    def __init__(self, tensors, mask: Optional[Tensor]):
        self.tensors = tensors  # padded batch tensor, e.g. (N, C, H, W)
        self.mask = mask        # padding mask, True at padded positions, (N, H, W)
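To see why this is useful: the images in a batch can have different sizes, so they are padded to a common size and the mask records which pixels are padding. The following is a simplified sketch of that idea, not the actual util.misc.nested_tensor_from_tensor_list implementation (the helper name to_nested_tensor is mine):

import torch

def to_nested_tensor(images):
    # images: list of (C, H_i, W_i) tensors with different spatial sizes
    n = len(images)
    c = images[0].shape[0]
    h = max(img.shape[1] for img in images)
    w = max(img.shape[2] for img in images)
    batch = torch.zeros(n, c, h, w)
    mask = torch.ones(n, h, w, dtype=torch.bool)  # True = padding
    for i, img in enumerate(images):
        batch[i, :, :img.shape[1], :img.shape[2]] = img
        mask[i, :img.shape[1], :img.shape[2]] = False  # real pixels
    return NestedTensor(batch, mask)

samples = to_nested_tensor([torch.rand(3, 480, 600), torch.rand(3, 520, 560)])
print(samples.tensors.shape, samples.mask.shape)  # torch.Size([2, 3, 520, 600]) torch.Size([2, 520, 600])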

The class PositionEmbeddingSine builds a position encoding from the sine/cosine formula:

class PositionEmbeddingSine(nn.Module):
    """
    This is a more standard version of the position embedding, very similar to the one
    used by the Attention is all you need paper, generalized to work on images.
    """
    def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
        super().__init__()
        self.num_pos_feats = num_pos_feats  # length of the encoding for each position (per axis)
        self.temperature = temperature
        self.normalize = normalize
        if scale is not None and normalize is False:
            raise ValueError("normalize should be True if scale is passed")
        if scale is None:
            scale = 2 * math.pi
        self.scale = scale

    def forward(self, tensor_list: NestedTensor):
        x = tensor_list.tensors
        mask = tensor_list.mask
        assert mask is not None
        not_mask = ~mask
        y_embed = not_mask.cumsum(1, dtype=torch.float32)  # cumulative sum along the height dim: a unique y coordinate per row
        x_embed = not_mask.cumsum(2, dtype=torch.float32)  # cumulative sum along the width dim: a unique x coordinate per column
        if self.normalize:
            eps = 1e-6
            y_embed = (y_embed - 0.5) / (y_embed[:, -1:, :] + eps) * self.scale
            x_embed = (x_embed - 0.5) / (x_embed[:, :, -1:] + eps) * self.scale

        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)  # indices along the feature dimension
        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)  # each sin/cos pair shares one frequency

        pos_x = x_embed[:, :, :, None] / dim_t
        pos_y = y_embed[:, :, :, None] / dim_t
        pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)
        pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3)
        pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
        return pos

This implements the sinusoidal position-encoding formula from "Attention Is All You Need", applied separately to the x and y coordinates:

PE(pos, 2i)   = sin(pos / temperature^(2i / num_pos_feats))
PE(pos, 2i+1) = cos(pos / temperature^(2i / num_pos_feats))

In the end every position gets a unique encoding: num_pos_feats dimensions for y and num_pos_feats for x, concatenated into a vector of 2 * num_pos_feats channels (which is why build_position_encoding below passes hidden_dim // 2 as num_pos_feats).
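A quick shape check (a minimal sketch; the feature map sizes are just illustrative):

import torch

pos_embed = PositionEmbeddingSine(num_pos_feats=128, normalize=True)
mask = torch.zeros(2, 32, 48, dtype=torch.bool)           # no padding in this toy example
samples = NestedTensor(torch.rand(2, 256, 32, 48), mask)  # stand-in for backbone features
pos = pos_embed(samples)
print(pos.shape)  # torch.Size([2, 256, 32, 48]) -> 2 * num_pos_feats channels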

Besides this, there is also a learnable position embedding:

class PositionEmbeddingLearned(nn.Module):
    """
    Absolute pos embedding, learned.
    """
    def __init__(self, num_pos_feats=256):
        super().__init__()
        self.row_embed = nn.Embedding(50, num_pos_feats)
        self.col_embed = nn.Embedding(50, num_pos_feats)
        self.reset_parameters()

    def reset_parameters(self):
        nn.init.uniform_(self.row_embed.weight)
        nn.init.uniform_(self.col_embed.weight)

    def forward(self, tensor_list: NestedTensor):
        x = tensor_list.tensors
        h, w = x.shape[-2:]
        i = torch.arange(w, device=x.device)
        j = torch.arange(h, device=x.device)
        x_emb = self.col_embed(i)
        y_emb = self.row_embed(j)
        pos = torch.cat([
            x_emb.unsqueeze(0).repeat(h, 1, 1),
            y_emb.unsqueeze(1).repeat(1, w, 1),
        ], dim=-1).permute(2, 0, 1).unsqueeze(0).repeat(x.shape[0], 1, 1, 1)
        return pos

For me, nn.Embedding is probably new here. It creates a table of 50 learnable embedding vectors, each of length num_pos_feats; the 50 should be set according to the actual situation, e.g. the maximum feature-map size. In the forward pass the layer simply looks up the embedding vectors at the given indices; for example, x_emb = self.col_embed(i) selects the embedding vectors at the indices in i.
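A tiny illustration of how an nn.Embedding lookup works (the values are random, the sizes are just for illustration):

import torch
from torch import nn

embed = nn.Embedding(50, 4)   # 50 learnable vectors of length 4
idx = torch.arange(3)         # indices 0, 1, 2
print(embed(idx).shape)       # torch.Size([3, 4]): one row per index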
A unified interface for the two encoding methods:

def build_position_encoding(args):
    N_steps = args.hidden_dim // 2
    if args.position_embedding in ('v2', 'sine'):
        # TODO find a better way of exposing other arguments
        position_embedding = PositionEmbeddingSine(N_steps, normalize=True)
    elif args.position_embedding in ('v3', 'learned'):
        position_embedding = PositionEmbeddingLearned(N_steps)
    else:
        raise ValueError(f"not supported {args.position_embedding}")
    return position_embedding
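For example, with hidden_dim=256 and the 'sine' setting this builds a PositionEmbeddingSine with num_pos_feats=128 (argparse.Namespace is only used here to fake the args object):

from argparse import Namespace

args = Namespace(hidden_dim=256, position_embedding='sine')
pos_embed = build_position_encoding(args)
print(type(pos_embed).__name__)  # PositionEmbeddingSine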

The backbone.py file mainly relies on torchvision.models._utils.IntermediateLayerGetter, which selects which layers of a model are returned as outputs. Its code is as follows:

from collections import OrderedDict
from torch import nn

class IntermediateLayerGetter(nn.ModuleDict):
    """
    Module wrapper that returns intermediate layers from a model.

    It has a strong assumption that the modules have been registered
    into the model in the same order as they are used.
    This means that one should **not** reuse the same nn.Module
    twice in the forward if you want this to work.

    Additionally, it is only able to query submodules that are directly
    assigned to the model. So if `model` is passed, `model.feature1` can
    be returned, but not `model.feature1.layer2`. For a ResNet this means
    you can query `layer1`, but not a nested submodule such as `layer1.1.conv1`.

    Arguments:
        model (nn.Module): the model from which the features will be extracted,
            e.g. a ResNet
        return_layers (Dict[name, new_name]): a dict whose keys are the names of
            the modules whose activations will be returned, and whose values are
            the (user-chosen) names given to the returned activations.
    """
    def __init__(self, model, return_layers):
        if not set(return_layers).issubset([name for name, _ in model.named_children()]):
            raise ValueError("return_layers are not present in model")
 
        orig_return_layers = return_layers
        return_layers = {k: v for k, v in return_layers.items()}  # work on a copy so the caller's dict stays intact
        layers = OrderedDict()
        for name, module in model.named_children():
            layers[name] = module
            if name in return_layers:
                del return_layers[name]
            if not return_layers:
                break  # all requested layers collected; drop everything after them
 
        super(IntermediateLayerGetter, self).__init__(layers)
        self.return_layers = orig_return_layers
 
    def forward(self, x):
        out = OrderedDict()
        for name, module in self.named_children():
            x = module(x)  # run the submodules in registration order
            if name in self.return_layers:
                out_name = self.return_layers[name]
                out[out_name] = x  # stash this layer's output under its new name
        return out

For example:

m = torchvision.models.resnet50(pretrained=True)
new_m = torchvision.models._utils.IntermediateLayerGetter(m,{'layer1': '1', 'layer2': '2'})
out = new_m(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])

You can see that the outputs have been renamed to '1' and '2' and are taken from layer1 and layer2 of resnet50; for a 224x224 input their shapes are (1, 256, 56, 56) and (1, 512, 28, 28).

The function that builds the backbone:

def build_backbone(args):
    position_embedding = build_position_encoding(args)  # position encoding: one vector per position, shape (N, d, H, W)
    train_backbone = args.lr_backbone > 0  # train the backbone only if it gets a positive learning rate; otherwise it stays frozen
    return_interm_layers = args.masks or (args.num_feature_levels > 1)  # if True, intermediate layers 2, 3 and 4 are returned
    backbone = Backbone(args.backbone, train_backbone, return_interm_layers, args.dilation)
    model = Joiner(backbone, position_embedding)  # pairs each returned feature map with its position encoding
    return model
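Backbone and Joiner are also defined in backbone.py. Roughly, Joiner is an nn.Sequential of (backbone, position_embedding) whose forward runs the backbone and attaches a position encoding to every returned feature map. Below is a paraphrased sketch of the idea; the real Deformable DETR Joiner additionally stores the backbone strides and channel counts:

from torch import nn

class Joiner(nn.Sequential):
    def __init__(self, backbone, position_embedding):
        super().__init__(backbone, position_embedding)

    def forward(self, tensor_list: NestedTensor):
        xs = self[0](tensor_list)  # dict of NestedTensor feature maps from the backbone
        out, pos = [], []
        for name, x in sorted(xs.items()):
            out.append(x)
            pos.append(self[1](x).to(x.tensors.dtype))  # position encoding for this level
        return out, pos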

These two files mainly define the backbone: it provides the CNN features of the image along with the corresponding position encodings. Next, let's look at the transformer part.
