PatchEmbed in Swin Transformer: Principle and Code Walkthrough

1. Patch partition

From the paper: "use a patch size of 4 × 4 and thus the feature dimension of each patch is 4 × 4 × 3 = 48".
With a patch size of 4 × 4, the original RGB image is partitioned into non-overlapping patches, and each patch is flattened into a vector of dimension 4 × 4 × 3 = 48.
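As a quick sanity check of this arithmetic, here is a minimal sketch (not from the Swin source; it uses torch.nn.functional.unfold purely for illustration) showing that a 224 × 224 RGB image yields 56 × 56 = 3136 non-overlapping patches, each of dimension 48:

import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 224, 224)                  # dummy RGB image
patches = F.unfold(x, kernel_size=4, stride=4)  # extract non-overlapping 4x4 patches
print(patches.shape)                            # torch.Size([1, 48, 3136])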

2. Linear embedding

From the paper: "A linear embedding layer is applied on this raw-valued feature to project it to an arbitrary dimension (denoted as C)".

Here the linear embedding is implemented as a 2D convolution: it projects each patch to an arbitrary dimension, which is simply the convolution's number of output channels. Both the kernel size and the stride of this convolution equal the patch size, so each patch is covered exactly once with no overlap, making the convolution equivalent to a per-patch linear projection.
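To make that equivalence concrete, the sketch below (illustrative only; the names proj and linear and the Swin-T value C = 96 are assumptions, not part of this article's code) copies the convolution's weights into an nn.Linear and checks that both produce the same output on flattened patches:

import torch
import torch.nn as nn
import torch.nn.functional as F

patch_size, in_chans, C = 4, 3, 96  # C = 96 is the Swin-T embedding dimension
proj = nn.Conv2d(in_chans, C, kernel_size=patch_size, stride=patch_size)

# Build an equivalent per-patch linear layer from the conv's own weights
linear = nn.Linear(patch_size * patch_size * in_chans, C)
with torch.no_grad():
    linear.weight.copy_(proj.weight.reshape(C, -1))
    linear.bias.copy_(proj.bias)

x = torch.rand(1, in_chans, 224, 224)
y_conv = proj(x).flatten(2).transpose(1, 2)                           # (1, 3136, 96)
patches = F.unfold(x, patch_size, stride=patch_size).transpose(1, 2)  # (1, 3136, 48)
y_lin = linear(patches)
print(torch.allclose(y_conv, y_lin, atol=1e-5))  # True: the conv is a per-patch linear map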

3. Illustration

[Figure 1: the PatchEmbed pipeline, patch partition followed by linear embedding]

4. Code

import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """ Image to Patch Embedding
    """
    # Step 1: patch_size=4 sets the patch size used to partition the input image
    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=768):
        super().__init__()
        img_size = (img_size, img_size)
        patch_size = (patch_size, patch_size)
        num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
        self.img_size = img_size
        self.patch_size = patch_size
        self.num_patches = num_patches

        # kernel_size == stride == patch_size: one linear projection per non-overlapping patch
        self.project = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        print(x.shape)         # torch.Size([1, 3, 224, 224]) for the default input
        x = self.project(x)    # Step 2: linear embedding via 2D convolution
        print(x.shape)         # torch.Size([1, 768, 56, 56])
        x = x.flatten(2)       # Step 3: flatten the 56x56 spatial grid into one patch axis
        print(x.shape)         # torch.Size([1, 768, 3136])
        x = x.transpose(1, 2)  # Step 4: swap the patch count and per-patch embedding dims
        print(x.shape)         # torch.Size([1, 3136, 768])
        return x


if __name__ == "__main__":
    x = torch.rand([1, 3, 224, 224])

    model = PatchEmbed()
    y = model(x)
    print(y.shape)
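Run with the default arguments, this script prints the intermediate shapes noted in the comments above and finally torch.Size([1, 3136, 768]): 3136 = (224 / 4)² patches, each embedded as a 768-dimensional vector.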

