Residual Networks (ResNet)

The core idea of residual networks: every additional layer should be able to easily contain the original function (in particular, the identity mapping) as one of its elements. Intuitively, the function classes are then nested: each deeper class contains the shallower ones, so adding layers can only bring the network closer to the true function, never further away.

Residual block: stacking an extra layer in series changes the function class, effectively enlarging it; adding a fast (shortcut) path on top yields the residual structure, which acts like a skip operation. The advantage of residual blocks is that they make very deep networks much easier to train, since the input can propagate forward more quickly through the cross-layer path.
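A minimal sketch of this point (with a toy branch g, not the full residual block defined below): a residual mapping computes y = x + g(x), so whenever the learned branch g outputs zero, the block reduces exactly to the identity and can always fall back to doing nothing.

import torch
from torch import nn

# Toy residual mapping y = x + g(x); `g` is a hypothetical stand-in
# for the block's learned branch.
g = nn.Conv2d(3, 3, kernel_size=3, padding=1)
nn.init.zeros_(g.weight)  # zero out the learned branch
nn.init.zeros_(g.bias)

x = torch.rand(1, 3, 6, 6)
y = x + g(x)                 # with g == 0, the block is exactly the identity
print(torch.allclose(x, y))  # True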

ResNet architecture: similar to VGG, but with the VGG blocks replaced by residual blocks. A residual block first has two 3×3 convolutional layers with the same number of output channels, each followed by a batch normalization layer; a ReLU activation follows the first of them. A cross-layer data path skips these two convolutions and adds the input to the result just before the final ReLU activation. If the number of channels needs to change, an additional 1×1 convolutional layer transforms the input into the required shape before the addition.

import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l

class Residual(nn.Module):
    def __init__(self, input_channels, num_channels,
                 use_1x1conv=False, strides=1):
        super().__init__()
        # Two 3x3 convolutions with the same number of output channels;
        # the first one may downsample via `strides`
        self.conv1 = nn.Conv2d(input_channels, num_channels,
                               kernel_size=3, padding=1, stride=strides)
        self.conv2 = nn.Conv2d(num_channels, num_channels,
                               kernel_size=3, padding=1)
        # Optional 1x1 convolution that reshapes the shortcut, needed when
        # the channel count or the spatial size changes
        if use_1x1conv:
            self.conv3 = nn.Conv2d(input_channels, num_channels,
                                   kernel_size=1, stride=strides)
        else:
            self.conv3 = None
        self.bn1 = nn.BatchNorm2d(num_channels)
        self.bn2 = nn.BatchNorm2d(num_channels)

    def forward(self, X):
        Y = F.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)
        Y += X  # add the shortcut before the final ReLU
        return F.relu(Y)

# When input and output channels match, the shortcut is the identity
blk = Residual(3, 3)
X = torch.rand(4, 3, 6, 6)
Y = blk(X)  # Y.shape == torch.Size([4, 3, 6, 6]): shape is preserved

# Increase the number of output channels while halving the output height and width
blk = Residual(3, 6, use_1x1conv=True, strides=2)
# blk(X).shape == torch.Size([4, 6, 3, 3])

# The stem: a 7×7 convolutional layer with 64 output channels and stride 2,
# followed by a 3×3 max-pooling layer with stride 2; ResNet adds a batch
# normalization layer after every convolutional layer
b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                   nn.BatchNorm2d(64), nn.ReLU(),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
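As a quick sanity check of the stem (assuming a 224×224 single-channel input; the d2l experiments on Fashion-MNIST resize images to 96×96 instead):

X = torch.rand(1, 1, 224, 224)
print(b1(X).shape)  # torch.Size([1, 64, 56, 56]): height and width reduced 4x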

def resnet_block(input_channels, num_channels, num_residuals,
                 first_block=False):
    # A stage of `num_residuals` residual blocks. Except in the first stage
    # (which follows the stride-2 max pool), the first block halves the
    # height/width and changes the channel count via a 1x1-conv shortcut
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(input_channels, num_channels,
                                use_1x1conv=True, strides=2))
        else:
            blk.append(Residual(num_channels, num_channels))
    return blk

# Assemble all the residual blocks of ResNet; each module (stage) uses 2 residual blocks
b2 = nn.Sequential(*resnet_block(64, 64, 2, first_block=True))
b3 = nn.Sequential(*resnet_block(64, 128, 2))
b4 = nn.Sequential(*resnet_block(128, 256, 2))
b5 = nn.Sequential(*resnet_block(256, 512, 2))

# Add a global average pooling layer, followed by a fully connected layer for the output
net = nn.Sequential(b1, b2, b3, b4, b5,
                    nn.AdaptiveAvgPool2d((1,1)),
                    nn.Flatten(), nn.Linear(512, 10))
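To see how the shapes change across the modules, we can trace a dummy input through each top-level layer, following the shape-tracing idiom used throughout the d2l book:

X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:\t', X.shape)

# Sequential output shape:        torch.Size([1, 64, 56, 56])
# Sequential output shape:        torch.Size([1, 64, 56, 56])
# Sequential output shape:        torch.Size([1, 128, 28, 28])
# Sequential output shape:        torch.Size([1, 256, 14, 14])
# Sequential output shape:        torch.Size([1, 512, 7, 7])
# AdaptiveAvgPool2d output shape: torch.Size([1, 512, 1, 1])
# Flatten output shape:           torch.Size([1, 512])
# Linear output shape:            torch.Size([1, 10])

Training on Fashion-MNIST then follows the usual d2l recipe, e.g. loading data with d2l.load_data_fashion_mnist(batch_size, resize=96) and calling d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu()).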
