MobileNet V2: A PyTorch Implementation

Paper: https://arxiv.org/pdf/1801.04381.pdf

Purpose of MobileNet V2: extract features from images and classify them based on those features. (It can also serve as a backbone for detection and segmentation tasks.)

Advantage of MobileNet V2: compared with V1, the model is smaller and more accurate.

Key techniques in MobileNet V2:

① Inverted residual structure (Inverted Residuals): a 1x1 convolution to expand channels, a 3x3 depthwise convolution, then a 1x1 convolution to reduce channels;

② Linear bottleneck (Linear Bottlenecks): the last layer of each block is linear, i.e. the final 1x1 projection is not followed by a ReLU6 activation (ReLU6 itself is illustrated below).
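
For context, ReLU6 is simply ReLU clamped from above at 6, i.e. relu6(x) = min(max(0, x), 6). A minimal numeric check (not part of the original code):

import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 3.0, 8.0])
print(F.relu6(x))  # tensor([0., 3., 6.]): values below 0 and above 6 are clamped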

Difference between an Inverted residual block and a Residual block:

Inverted residual block: 1x1 ConvBNReLU6 (expand) + 3x3 DepthwiseConvBNReLU6 + 1x1 ConvBN (project) + x;

Residual block: 1x1 ConvBNReLU (reduce) + 3x3 ConvBNReLU + 1x1 ConvBN (expand) + x + ReLU;

[Figure 1: Inverted residual block vs. Residual block]
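
To make the channel flow concrete, here is a minimal sketch of the three convolutions in an inverted residual block. BN and ReLU6 are omitted for brevity, and the channel sizes (24 in, t = 6) are taken from one stage of the network below:

import torch
import torch.nn as nn

# expand (24 -> 144, t = 6), depthwise conv, then project back (144 -> 24)
expand = nn.Conv2d(24, 144, kernel_size=1, bias=False)
depthwise = nn.Conv2d(144, 144, kernel_size=3, padding=1, groups=144, bias=False)
project = nn.Conv2d(144, 24, kernel_size=1, bias=False)  # linear bottleneck: no ReLU6 after this

x = torch.randn(1, 24, 56, 56)
out = project(depthwise(expand(x)))
print(out.shape)  # torch.Size([1, 24, 56, 56]); same shape as x, so x + out is valid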

Structure of the Inverted residual block (the shortcut connection is added only when the depthwise stride is 1 and the input and output channel counts are equal):

[Figure 2: structure of the Inverted residual block]
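
The shortcut condition can be written as a one-line predicate (the helper name is mine; the same check appears in the forward method of the implementation below):

def use_shortcut(in_channel, out_channel, stride):
    # x + F(x) is only well-defined when F(x) has exactly the shape of x
    return stride == 1 and in_channel == out_channel

print(use_shortcut(32, 32, 1))  # True: shortcut connected
print(use_shortcut(32, 64, 2))  # False: no shortcut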

Overall structure of MobileNet V2: a conv block + a stack of inverted residual blocks + a conv block + average pooling + a 1x1 conv.

[Figure 3: MobileNet V2 architecture]
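
For reference, the stage settings can be summarized as a (t, c, n, s) config list, following Table 2 of the paper (t = expansion ratio, c = output channels, n = number of repeats, s = stride of the first block in the stage; the variable name is my own). The hand-written feature stack in the code below is this table unrolled:

inverted_residual_cfg = [
    (1, 16, 1, 1),
    (6, 24, 2, 2),
    (6, 32, 3, 2),
    (6, 64, 4, 2),
    (6, 96, 3, 1),
    (6, 160, 3, 2),
    (6, 320, 1, 1),
]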

Note: if you want to use pretrained weights, refer to the official PyTorch implementation.
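
For example, loading the officially pretrained model from torchvision (the weights enum below assumes torchvision 0.13 or newer):

import torchvision
from torchvision.models import MobileNet_V2_Weights

model = torchvision.models.mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V1)
model.eval()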

import torch
import torch.nn as nn


def conv_block(in_channel, out_channel, kernel_size=3, stride=1, groups=1):  # conv + bn + relu6 block
    padding = 0 if kernel_size == 1 else 1  # padding: 0 for 1x1 kernels, 1 for 3x3 kernels
    return nn.Sequential(
        nn.Conv2d(in_channel, out_channel, kernel_size, stride, padding=padding, groups=groups, bias=False),  # conv
        nn.BatchNorm2d(out_channel),  # bn
        nn.ReLU6(inplace=True)  # relu6
    )


class InvertedResidual(nn.Module):  # inverted residual block
    def __init__(self, in_channel, out_channel, stride, t=6):  # initialization
        super(InvertedResidual, self).__init__()  # inherit from nn.Module
        self.in_channel = in_channel  # input channels
        self.out_channel = out_channel  # output channels
        self.stride = stride  # stride of the depthwise conv
        self.t = t  # hidden-layer channel multiplier, the expansion ratio from the paper
        self.hidden_channel = in_channel * t  # hidden (expanded) channels

        layers = []  # holds the block's layers
        if self.t != 1:  # when the expansion ratio is 1, the 1x1 expansion conv is skipped
            layers += [conv_block(self.in_channel, self.hidden_channel, kernel_size=1)]  # 1x1 conv+bn+relu6 (expand)
        layers += [
            # depthwise conv, implemented as a grouped conv with groups equal to the channel count
            conv_block(self.hidden_channel, self.hidden_channel, stride=self.stride, groups=self.hidden_channel),
            # 1x1 conv+bn (project); slicing off the last layer drops the ReLU6 (linear bottleneck)
            conv_block(self.hidden_channel, self.out_channel, kernel_size=1)[:-1]
        ]
        self.residual_block = nn.Sequential(*layers)  # the inverted residual stack

    def forward(self, x):  # forward pass
        if self.stride == 1 and self.in_channel == self.out_channel:  # shortcut only when stride is 1 and channel counts match
            return x + self.residual_block(x)  # x + F(x)
        else:  # otherwise no shortcut
            return self.residual_block(x)  # F(x)


class MobileNetV2(nn.Module):  # MobileNet v2 network
    def __init__(self, num_classes):  # initialization
        super(MobileNetV2, self).__init__()  # inherit from nn.Module

        self.num_classes = num_classes  # number of classes
        self.feature = nn.Sequential(  # feature extraction part
            conv_block(3, 32, stride=2),  # conv+bn+relu6, (n,3,224,224)-->(n,32,112,112)
            InvertedResidual(32, 16, stride=1, t=1),  # inverted residual block, (n,32,112,112)-->(n,16,112,112)
            InvertedResidual(16, 24, stride=2),  # inverted residual block, (n,16,112,112)-->(n,24,56,56)
            InvertedResidual(24, 24, stride=1),  # inverted residual block, (n,24,56,56)-->(n,24,56,56)
            InvertedResidual(24, 32, stride=2),  # inverted residual block, (n,24,56,56)-->(n,32,28,28)
            InvertedResidual(32, 32, stride=1),  # inverted residual block, (n,32,28,28)-->(n,32,28,28)
            InvertedResidual(32, 32, stride=1),  # inverted residual block, (n,32,28,28)-->(n,32,28,28)
            InvertedResidual(32, 64, stride=2),  # inverted residual block, (n,32,28,28)-->(n,64,14,14)
            InvertedResidual(64, 64, stride=1),  # inverted residual block, (n,64,14,14)-->(n,64,14,14)
            InvertedResidual(64, 64, stride=1),  # inverted residual block, (n,64,14,14)-->(n,64,14,14)
            InvertedResidual(64, 64, stride=1),  # inverted residual block, (n,64,14,14)-->(n,64,14,14)
            InvertedResidual(64, 96, stride=1),  # inverted residual block, (n,64,14,14)-->(n,96,14,14)
            InvertedResidual(96, 96, stride=1),  # inverted residual block, (n,96,14,14)-->(n,96,14,14)
            InvertedResidual(96, 96, stride=1),  # inverted residual block, (n,96,14,14)-->(n,96,14,14)
            InvertedResidual(96, 160, stride=2),  # inverted residual block, (n,96,14,14)-->(n,160,7,7)
            InvertedResidual(160, 160, stride=1),  # inverted residual block, (n,160,7,7)-->(n,160,7,7)
            InvertedResidual(160, 160, stride=1),  # inverted residual block, (n,160,7,7)-->(n,160,7,7)
            InvertedResidual(160, 320, stride=1),  # inverted residual block, (n,160,7,7)-->(n,320,7,7)
            conv_block(320, 1280, kernel_size=1)  # conv+bn+relu6, (n,320,7,7)-->(n,1280,7,7)
        )

        self.classifier = nn.Sequential(  # classification part
            nn.AdaptiveAvgPool2d(1),  # global avgpool, (n,1280,7,7)-->(n,1280,1,1)
            nn.Conv2d(1280, self.num_classes, 1, 1, 0)  # 1x1 conv, (n,1280,1,1)-->(n,num_classes,1,1), equivalent to a linear layer
        )

    def forward(self, x):  # forward pass
        x = self.feature(x)  # extract features
        x = self.classifier(x)  # classify
        return x.view(-1, self.num_classes)  # drop the spatial dims and return logits, (n,num_classes,1,1)-->(n,num_classes)
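
A quick sanity check, appended to the code above, confirming the output shape:

if __name__ == "__main__":
    model = MobileNetV2(num_classes=1000)
    x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
    y = model(x)
    print(y.shape)  # torch.Size([1, 1000])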
