github:https://github.com/facebookresearch/mae
论文:https://arxiv.org/abs/2111.06377
Commentary: MAE explained, part 1 (Kaiming He's latest first-author paper)
This hands-on MAE walkthrough proceeds in several parts:
"Unity of knowledge and action"; as the poem goes, "what comes from paper is always shallow; to truly know a thing, you must practice it yourself."
We start by quickly visualizing what MAE does in Jupyter. Below, MAE reconstructs an image with 75% of its patches masked out; this runs without a GPU.
Bug: __init__() got an unexpected keyword argument 'qk_scale' #58
https://github.com/facebookresearch/mae/issues/58#
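The MAE repo was written against timm==0.3.2; newer timm versions removed the qk_scale argument from Block, which is what triggers this error. A common workaround is either to pin the old version (pip install timm==0.3.2) or to drop the argument in models_mae.py. A minimal sketch of the latter, assuming timm >= 0.4:

# in MaskedAutoencoderViT.__init__: drop qk_scale, which newer timm removed
self.blocks = nn.ModuleList([
    Block(embed_dim, num_heads, mlp_ratio, qkv_bias=True, norm_layer=norm_layer)
    for i in range(depth)])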
Saving a Jupyter notebook as a Python script from the command line, for example:
# Run inside a Jupyter cell to convert the notebook to a .py script:
!jupyter nbconvert --to python mae_visualize.ipynb
# --to python (or --to script) produces a .py file; --to html produces an .html file
# mae_visualize.ipynb is the name of the notebook being converted
Run MAE on the image
# make random mask reproducible (comment out to make it change)
torch.manual_seed(2)
print('MAE with pixel reconstruction:')
run_one_image(img, model_mae)
MAE with pixel reconstruction:
The cells above visualize MAE's reconstruction, which is built from three parts. First, random sampling: a high proportion of patches (e.g. 75%) is masked out, leaving 25% visible. Second, the encoder learns a representation from only that visible quarter; note that no mask tokens are fed into the encoder. Third, the decoder receives the encoded representation together with mask tokens standing in for the 75% of removed patches; its output is a vector of pixel values per patch, which is reshaped back into an ordered image, i.e. the reconstruction.
The above is the pre-training process. For inference or recognition, the decoder is discarded and only the encoder is kept; the encoder then receives a complete image with no masking.
Our purpose in dissecting MAE is to use the pre-trained model as an effective backbone for downstream tasks. The paper's downstream experiments cover object detection and instance segmentation (Mask R-CNN) and semantic segmentation (UperNet); our goal is to apply it to 2D pose estimation.
So our plan is: first, grab a fine-tuned classification model to serve as our backbone; second, attach a lightweight head after the backbone that outputs heatmaps for the pose-estimation task, as sketched below.
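What follows is a minimal sketch of such a head, not code from the MAE repo: a hypothetical deconvolution head in the style of SimpleBaseline that maps ViT patch tokens to per-joint heatmaps. All names and defaults here (LightweightPoseHead, num_joints, the 14×14 token grid) are our own assumptions.

import torch
import torch.nn as nn

class LightweightPoseHead(nn.Module):
    """Hypothetical head: ViT patch tokens -> joint heatmaps (sketch only)."""
    def __init__(self, embed_dim=1024, num_joints=17, hw=14):
        super().__init__()
        self.hw = hw  # 224/16 = 14 patches per side
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 256, 4, stride=2, padding=1),  # 14 -> 28
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1),        # 28 -> 56
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
        )
        self.final = nn.Conv2d(256, num_joints, 1)  # 1x1 conv -> K heatmaps

    def forward(self, tokens):        # tokens: [N, 1+196, D] from the MAE encoder
        x = tokens[:, 1:, :]          # drop the class token
        N, L, D = x.shape
        x = x.transpose(1, 2).reshape(N, D, self.hw, self.hw)  # [N, D, 14, 14]
        return self.final(self.deconv(x))                      # [N, K, 56, 56]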
The Masked Autoencoder model (class MaskedAutoencoderViT in models_mae.py) is defined as follows:
def __init__(self, img_size=224, patch_size=16, in_chans=3,
embed_dim=1024, depth=24, num_heads=16,
decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16,
mlp_ratio=4., norm_layer=nn.LayerNorm, norm_pix_loss=False):
super().__init__()
# --------------------------------------------------------------------------
# MAE encoder specifics
self.patch_embed = PatchEmbed(img_size, patch_size, in_chans, embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim), requires_grad=False) # fixed sin-cos embedding
self.blocks = nn.ModuleList([
Block(embed_dim, num_heads, mlp_ratio, qkv_bias=True, qk_scale=None, norm_layer=norm_layer)
for i in range(depth)])
self.norm = norm_layer(embed_dim)
# --------------------------------------------------------------------------
# --------------------------------------------------------------------------
# MAE decoder specifics
self.decoder_embed = nn.Linear(embed_dim, decoder_embed_dim, bias=True)
self.mask_token = nn.Parameter(torch.zeros(1, 1, decoder_embed_dim))
self.decoder_pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, decoder_embed_dim), requires_grad=False) # fixed sin-cos embedding
self.decoder_blocks = nn.ModuleList([
Block(decoder_embed_dim, decoder_num_heads, mlp_ratio, qkv_bias=True, qk_scale=None, norm_layer=norm_layer)
for i in range(decoder_depth)])
self.decoder_norm = norm_layer(decoder_embed_dim)
self.decoder_pred = nn.Linear(decoder_embed_dim, patch_size**2 * in_chans, bias=True) # decoder to patch
# --------------------------------------------------------------------------
self.norm_pix_loss = norm_pix_loss
self.initialize_weights()
The MAE architecture consists of the two parts defined above: the encoder and the decoder. The encoder is essentially the ViT-Large model, with 24 transformer blocks; the decoder is a lightweight network of 8 transformer blocks at a narrower width (512).
The principles and usage of PatchEmbed, Block, and pos_embed will get a dedicated post on the original Transformer.
The patchify function: turns an N×3×224×224 image into 16×16 patches and flattens each patch (16×16 pixels × 3 channels = 768 values) into a vector, so the output is N×196×768.
def patchify(self, imgs):
"""
imgs: (N, 3, H, W)
x: (N, L, patch_size**2 *3)
"""
p = self.patch_embed.patch_size[0] #p=16
assert imgs.shape[2] == imgs.shape[3] and imgs.shape[2] % p == 0
h = w = imgs.shape[2] // p # 224/16=14
x = imgs.reshape(shape=(imgs.shape[0], 3, h, p, w, p))
x = torch.einsum('nchpwq->nhwpqc', x)
x = x.reshape(shape=(imgs.shape[0], h * w, p**2 * 3)) # 14*14=196 patches, 16**2*3=768 values each
return x
The unpatchify function: the inverse of patchify; recovers the image from the sequence, i.e. N×196×768 back to an N×3×224×224 RGB image.
def unpatchify(self, x):
"""
x: (N, L, patch_size**2 *3)
imgs: (N, 3, H, W)
"""
p = self.patch_embed.patch_size[0] #16
h = w = int(x.shape[1]**.5) #14
assert h * w == x.shape[1]
x = x.reshape(shape=(x.shape[0], h, w, p, p, 3))
x = torch.einsum('nhwpqc->nchpwq', x)
imgs = x.reshape(shape=(x.shape[0], 3, h * p, h * p))
return imgs
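A quick sanity check, assuming model is an instance of the MaskedAutoencoderViT class above: patchify followed by unpatchify reproduces the input exactly.

imgs = torch.randn(2, 3, 224, 224)     # dummy batch
x = model.patchify(imgs)               # [2, 196, 768]
assert x.shape == (2, 196, 768)
recon = model.unpatchify(x)            # [2, 3, 224, 224]
assert torch.allclose(imgs, recon)     # exact round trip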
The random_masking function: performs the random sampling. Uniform random noise is drawn for every patch, argsorted from low to high, and the patches with the smallest noise values (the first 25%) are kept while the rest are removed; this per-sample shuffle realizes random masking of image patches.
def random_masking(self, x, mask_ratio):
"""
Perform per-sample random masking by per-sample shuffling.
Per-sample shuffling is done by argsort random noise.
x: [N, L, D], sequence
"""
N, L, D = x.shape # batch, length, dim (e.g. N×196×1024 after patch_embed)
len_keep = int(L * (1 - mask_ratio)) # number of patches kept: 196*(1-0.75)=49
noise = torch.rand(N, L, device=x.device) # noise in [0, 1]
# sort noise for each sample
ids_shuffle = torch.argsort(noise, dim=1) # ascend: small is keep, large is remove
ids_restore = torch.argsort(ids_shuffle, dim=1)
# keep the first subset
ids_keep = ids_shuffle[:, :len_keep]
x_masked = torch.gather(x, dim=1, index=ids_keep.unsqueeze(-1).repeat(1, 1, D))
# generate the binary mask: 0 is keep, 1 is remove
mask = torch.ones([N, L], device=x.device)
mask[:, :len_keep] = 0
# unshuffle to get the binary mask
mask = torch.gather(mask, dim=1, index=ids_restore)
return x_masked, mask, ids_restore
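A shape check for random_masking, again assuming model is an instance of the class above:

x = torch.randn(2, 196, 1024)   # dummy patch embeddings (ViT-Large width)
x_masked, mask, ids_restore = model.random_masking(x, mask_ratio=0.75)
print(x_masked.shape)    # torch.Size([2, 49, 1024]): only the kept 25%
print(mask.sum(dim=1))   # tensor([147., 147.]): 147 of 196 patches masked
print(ids_restore.shape) # torch.Size([2, 196])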
For torch.gather, see the official torch documentation. With dim=1, each row of index picks elements along the sequence dimension, which is exactly the pattern used above.
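A tiny self-contained example of this gather pattern (the values are made up):

x = torch.tensor([[10, 20, 30],
                  [40, 50, 60]])
idx = torch.tensor([[2, 0],
                    [1, 1]])
# dim=1: out[n][j] = x[n][idx[n][j]]
print(torch.gather(x, dim=1, index=idx))
# tensor([[30, 10],
#         [50, 50]])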
The forward_encoder function: a complete image enters the encoder. First the usual patch embedding is applied, then the fixed 2D sine-cosine position embedding is added; note this happens before masking, so every kept patch retains its correct positional information. Random masking then shrinks the sequence from 196 patches to 49, so the masked output does not have the same shape as the input. The class token (with its own position embedding) is prepended, giving a sequence of length 50, which passes through the stack of transformer blocks and is finally layer-normalized.
def forward_encoder(self, x, mask_ratio):
# embed patches
x = self.patch_embed(x)
# add pos embed w/o cls token
x = x + self.pos_embed[:, 1:, :]
# masking: length -> length * mask_ratio
x, mask, ids_restore = self.random_masking(x, mask_ratio)
# append cls token
cls_token = self.cls_token + self.pos_embed[:, :1, :]
cls_tokens = cls_token.expand(x.shape[0], -1, -1)
x = torch.cat((cls_tokens, x), dim=1)
# apply Transformer blocks
for blk in self.blocks:
x = blk(x)
x = self.norm(x)
return x, mask, ids_restore
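A shape walk-through for forward_encoder, assuming the default ViT-Large configuration and mask_ratio=0.75:

imgs = torch.randn(2, 3, 224, 224)
latent, mask, ids_restore = model.forward_encoder(imgs, mask_ratio=0.75)
print(latent.shape)       # torch.Size([2, 50, 1024]): 49 visible patches + 1 cls token
print(mask.shape)         # torch.Size([2, 196]): 0 = keep, 1 = remove
print(ids_restore.shape)  # torch.Size([2, 196])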
The forward_decoder function: the decoder input combines the representation learned by the encoder with mask tokens standing in for the removed 3/4 of the patches. mask_token is a single shared, learned vector, in the spirit of BERT's [MASK] token rather than a class token. The encoded tokens and the repeated mask tokens are concatenated, torch.gather with ids_restore unshuffles them back into the original patch order, and the class token is prepended again (if the shapes feel murky, printing mask_tokens and x_ in a debugger helps). The decoder position embedding is then added, the sequence passes through the shallow transformer stack, and after layer norm a fully connected layer (decoder_pred) projects each token to 16×16×3 = 768 pixel values. The class token is removed at the end because the loss is computed only over image patches.
def forward_decoder(self, x, ids_restore):
# embed tokens
x = self.decoder_embed(x)
# append mask tokens to sequence
mask_tokens = self.mask_token.repeat(x.shape[0], ids_restore.shape[1] + 1 - x.shape[1], 1)
x_ = torch.cat([x[:, 1:, :], mask_tokens], dim=1) # no cls token
x_ = torch.gather(x_, dim=1, index=ids_restore.unsqueeze(-1).repeat(1, 1, x.shape[2])) # unshuffle
x = torch.cat([x[:, :1, :], x_], dim=1) # append cls token
# add pos embed
x = x + self.decoder_pos_embed
# apply Transformer blocks
for blk in self.decoder_blocks:
x = blk(x)
x = self.decoder_norm(x)
# predictor projection
x = self.decoder_pred(x)
# remove cls token
x = x[:, 1:, :]
return x
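Continuing the shape walk-through (reusing latent and ids_restore from the encoder check above), the decoder maps the 50 encoded tokens back to one pixel vector per patch:

pred = model.forward_decoder(latent, ids_restore)
print(pred.shape)  # torch.Size([2, 196, 768]): 16*16*3 pixel values per patch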
The forward_loss function: the prediction is the reconstructed per-patch pixel vectors, and the original image is converted to the same shape via patchify to serve as the target. When norm_pix_loss is set, each target patch is normalized by its own mean and standard deviation. The loss is the mean squared error, averaged only over the masked (removed) 3/4 of the patches.
def forward_loss(self, imgs, pred, mask):
"""
imgs: [N, 3, H, W]
pred: [N, L, p*p*3]
mask: [N, L], 0 is keep, 1 is remove,
"""
target = self.patchify(imgs)
if self.norm_pix_loss:
mean = target.mean(dim=-1, keepdim=True)
var = target.var(dim=-1, keepdim=True)
target = (target - mean) / (var + 1.e-6)**.5
loss = (pred - target) ** 2
loss = loss.mean(dim=-1) # [N, L], mean loss per patch
loss = (loss * mask).sum() / mask.sum() # mean loss on removed patches
return loss
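Putting it all together, a minimal end-to-end sketch, assuming models_mae.py from the repo is importable (mae_vit_large_patch16 is a factory function defined there):

import torch
from models_mae import mae_vit_large_patch16

model = mae_vit_large_patch16()
imgs = torch.randn(2, 3, 224, 224)
loss, pred, mask = model(imgs, mask_ratio=0.75)  # forward returns (loss, pred, mask)
print(loss.item())                    # scalar MSE over masked patches
print(pred.shape)                     # torch.Size([2, 196, 768])
print(model.unpatchify(pred).shape)   # torch.Size([2, 3, 224, 224])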