A residual network (ResNet) alleviates the vanishing-gradient problem by adding shortcut (skip) connections around the nonlinear layers of a neural network, which makes deep networks much easier to train.
The basic building block of a residual network is the residual unit.
Let $f(\mathbf x;\theta)$ denote one or more neural layers; a residual unit adds a shortcut connection between the input and the output of $f(\cdot)$.
The difference: instead of asking the network $f(\mathbf x;\theta)$ to approximate a target function $h(\mathbf x)$ directly, as in a conventional architecture, a residual network splits the target function $h(\mathbf x)$ into two parts: the identity function $\mathbf x$ and the residual function $h(\mathbf x)-\mathbf x$:

$$\mathrm{ResBlock}_f(\mathbf x) = f(\mathbf x;\theta) + \mathbf x, \qquad (5.22)$$

where $\theta$ denotes the learnable parameters.
A typical residual unit, shown in Figure 5.14, consists of several stacked convolutional layers and one cross-layer shortcut connection.
A residual network is usually built by stacking many residual units. Below we construct ResNet18, a residual network that is very commonly used in computer vision, and repeat the handwritten-digit recognition task from the previous section.
In this section we first build the residual unit of ResNet18 and then assemble the complete network.
Here we implement an operator, ResBlock, for the residual unit. It defines a use_residual argument, which later experiments use to control whether the residual connection is applied.
The input and output of the nonlinear layers wrapped by a residual unit must have the same shape. If a convolutional layer's output feature map has a different number of channels from its input, the two cannot be added directly. To solve this, we can use a $1 \times 1$ convolution to map the input feature map to the same number of channels as the output of the stacked convolutions.
$1 \times 1$ convolution: it works exactly like a standard convolution, except that the kernel size is $1 \times 1$, so it ignores local spatial relationships in the input and focuses on interactions across channels. A $1 \times 1$ convolution is typically used to adjust (increase or reduce) the number of channels so that shapes match before an element-wise addition, to fuse information across channels at each spatial position, and to reduce the number of parameters and the amount of computation.
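For example, the following short sketch (with illustrative shapes of my own choosing) uses a $1 \times 1$ convolution to lift a 64-channel feature map to 128 channels so that it can be added to the output of the stacked convolutions:

import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)                            # input feature map with 64 channels
conv1x1 = nn.Conv2d(64, 128, kernel_size=1, bias=False)   # only mixes information across channels
shortcut = conv1x1(x)
print(shortcut.shape)                                      # torch.Size([1, 128, 16, 16])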
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, use_residual=True):
        """
        Residual unit.
        Inputs:
            - in_channels: number of input channels
            - out_channels: number of output channels
            - stride: stride of the residual unit, controlled via the first convolutional layer
            - use_residual: whether to use the residual (shortcut) connection
        """
        super(ResBlock, self).__init__()
        self.stride = stride
        self.use_residual = use_residual
        # First convolutional layer: 3x3 kernel; output channels and stride are configurable
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, padding=1, stride=self.stride, bias=False)
        # Second convolutional layer: 3x3 kernel, stride 1; keeps the feature-map shape unchanged
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        # If the output of conv2 and the block's input have different shapes, set use_1x1conv = True:
        # a 1x1 convolution is then applied to the input so that its shape matches the output of conv2
        if in_channels != out_channels or stride != 1:
            self.use_1x1conv = True
        else:
            self.use_1x1conv = False
        # When the wrapped layers change the channel count (or spatial size), the shortcut needs a
        # 1x1 convolution to adjust the input before the element-wise addition
        if self.use_1x1conv:
            self.shortcut = nn.Conv2d(in_channels, out_channels, 1, stride=self.stride, bias=False)
        # Each convolutional layer is followed by batch normalization (covered in detail in Section 7.5.1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        if self.use_1x1conv:
            self.bn3 = nn.BatchNorm2d(out_channels)

    def forward(self, inputs):
        y = F.relu(self.bn1(self.conv1(inputs)))
        y = self.bn2(self.conv2(y))
        if self.use_residual:
            if self.use_1x1conv:  # apply the 1x1 convolution to inputs so its shape matches y
                shortcut = self.shortcut(inputs)
                shortcut = self.bn3(shortcut)
            else:                 # otherwise add inputs to the output of conv2 directly
                shortcut = inputs
            y = torch.add(shortcut, y)
        out = F.relu(y)
        return out
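As a quick check of how the $1 \times 1$ shortcut convolution kicks in (a small sketch of my own; the shapes are chosen just for illustration), compare a block that keeps the input shape with one that changes it:

# Same channel count and stride 1: the input is added to the output directly
block1 = ResBlock(in_channels=64, out_channels=64, stride=1)
# Different channel count and stride 2: the shortcut goes through the 1x1 convolution
block2 = ResBlock(in_channels=64, out_channels=128, stride=2)
x = torch.randn(1, 64, 16, 8)
print(block1(x).shape)  # torch.Size([1, 64, 16, 8])
print(block2(x).shape)  # torch.Size([1, 128, 8, 4])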
A residual network is formed by chaining many residual units into a very deep model. The architecture of ResNet18 is shown in Figure 5.16.
For easier understanding, the ResNet18 network can be divided into six modules:
Module 1: a 7×7 convolution, batch normalization, ReLU, and a 3×3 max-pooling layer;
Module 2: two residual units with 64 output channels; the feature-map size is unchanged;
Module 3: two residual units with 128 output channels; the feature-map size is halved;
Module 4: two residual units with 256 output channels; the feature-map size is halved;
Module 5: two residual units with 512 output channels; the feature-map size is halved;
Module 6: a global average-pooling layer followed by a fully connected layer.
The code for the ResNet18 model is as follows.
Define Module 1.
def make_first_module(in_channels):
    # Module 1: 7x7 convolution, batch normalization, ReLU, max pooling
    m1 = nn.Sequential(nn.Conv2d(in_channels, 64, 7, stride=2, padding=3),
                       nn.BatchNorm2d(64), nn.ReLU(),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    return m1
Define Modules 2 to 5.
def resnet_module(input_channels, out_channels, num_res_blocks, stride=1, use_residual=True):
    blk = []
    # Create num_res_blocks residual units in a loop
    for i in range(num_res_blocks):
        if i == 0:  # the first residual unit of the module
            blk.append(ResBlock(input_channels, out_channels,
                                stride=stride, use_residual=use_residual))
        else:       # the remaining residual units of the module
            blk.append(ResBlock(out_channels, out_channels, use_residual=use_residual))
    return blk
Wrap Modules 2 to 5.
def make_modules(use_residual):
    # Module 2: two residual units, 64 input and 64 output channels, stride 1; feature-map size unchanged
    m2 = nn.Sequential(*resnet_module(64, 64, 2, stride=1, use_residual=use_residual))
    # Module 3: two residual units, 64 input and 128 output channels, stride 2; feature-map size halved
    m3 = nn.Sequential(*resnet_module(64, 128, 2, stride=2, use_residual=use_residual))
    # Module 4: two residual units, 128 input and 256 output channels, stride 2; feature-map size halved
    m4 = nn.Sequential(*resnet_module(128, 256, 2, stride=2, use_residual=use_residual))
    # Module 5: two residual units, 256 input and 512 output channels, stride 2; feature-map size halved
    m5 = nn.Sequential(*resnet_module(256, 512, 2, stride=2, use_residual=use_residual))
    return m2, m3, m4, m5
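As a quick sanity check (an illustrative sketch of my own; the 1×64×32 input mirrors the summary printed below), we can trace the feature-map shapes through the first five modules:

m1 = make_first_module(1)
m2, m3, m4, m5 = make_modules(use_residual=True)
x = torch.randn(1, 1, 64, 32)
for m in (m1, m2, m3, m4, m5):
    x = m(x)
    print(x.shape)
# Expected: (1, 64, 16, 8) -> (1, 64, 16, 8) -> (1, 128, 8, 4) -> (1, 256, 4, 2) -> (1, 512, 2, 1)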
Define the complete network.
# Define the complete network
class Model_ResNet18(nn.Module):
    def __init__(self, in_channels=3, num_classes=10, use_residual=True):
        super(Model_ResNet18, self).__init__()
        m1 = make_first_module(in_channels)
        m2, m3, m4, m5 = make_modules(use_residual)
        # Chain Module 1 through Module 6
        self.net = nn.Sequential(m1, m2, m3, m4, m5,
                                 # Module 6: global average pooling and fully connected layer
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes))

    def forward(self, x):
        return self.net(x)
Here we can again use torchsummary's summary function to count the model's parameters.
from torchsummary import summary

model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=True)
params_info = summary(model, input_size=(1, 64, 32))
print(params_info)
'''
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 32, 16] 3,200
BatchNorm2d-2 [-1, 64, 32, 16] 128
ReLU-3 [-1, 64, 32, 16] 0
MaxPool2d-4 [-1, 64, 16, 8] 0
Conv2d-5 [-1, 64, 16, 8] 36,864
BatchNorm2d-6 [-1, 64, 16, 8] 128
Conv2d-7 [-1, 64, 16, 8] 36,864
BatchNorm2d-8 [-1, 64, 16, 8] 128
ResBlock-9 [-1, 64, 16, 8] 0
Conv2d-10 [-1, 64, 16, 8] 36,864
BatchNorm2d-11 [-1, 64, 16, 8] 128
Conv2d-12 [-1, 64, 16, 8] 36,864
BatchNorm2d-13 [-1, 64, 16, 8] 128
ResBlock-14 [-1, 64, 16, 8] 0
Conv2d-15 [-1, 128, 8, 4] 73,728
BatchNorm2d-16 [-1, 128, 8, 4] 256
Conv2d-17 [-1, 128, 8, 4] 147,456
BatchNorm2d-18 [-1, 128, 8, 4] 256
Conv2d-19 [-1, 128, 8, 4] 8,192
BatchNorm2d-20 [-1, 128, 8, 4] 256
ResBlock-21 [-1, 128, 8, 4] 0
Conv2d-22 [-1, 128, 8, 4] 147,456
BatchNorm2d-23 [-1, 128, 8, 4] 256
Conv2d-24 [-1, 128, 8, 4] 147,456
BatchNorm2d-25 [-1, 128, 8, 4] 256
ResBlock-26 [-1, 128, 8, 4] 0
Conv2d-27 [-1, 256, 4, 2] 294,912
BatchNorm2d-28 [-1, 256, 4, 2] 512
Conv2d-29 [-1, 256, 4, 2] 589,824
BatchNorm2d-30 [-1, 256, 4, 2] 512
Conv2d-31 [-1, 256, 4, 2] 32,768
BatchNorm2d-32 [-1, 256, 4, 2] 512
ResBlock-33 [-1, 256, 4, 2] 0
Conv2d-34 [-1, 256, 4, 2] 589,824
BatchNorm2d-35 [-1, 256, 4, 2] 512
Conv2d-36 [-1, 256, 4, 2] 589,824
BatchNorm2d-37 [-1, 256, 4, 2] 512
ResBlock-38 [-1, 256, 4, 2] 0
Conv2d-39 [-1, 512, 2, 1] 1,179,648
BatchNorm2d-40 [-1, 512, 2, 1] 1,024
Conv2d-41 [-1, 512, 2, 1] 2,359,296
BatchNorm2d-42 [-1, 512, 2, 1] 1,024
Conv2d-43 [-1, 512, 2, 1] 131,072
BatchNorm2d-44 [-1, 512, 2, 1] 1,024
ResBlock-45 [-1, 512, 2, 1] 0
Conv2d-46 [-1, 512, 2, 1] 2,359,296
BatchNorm2d-47 [-1, 512, 2, 1] 1,024
Conv2d-48 [-1, 512, 2, 1] 2,359,296
BatchNorm2d-49 [-1, 512, 2, 1] 1,024
ResBlock-50 [-1, 512, 2, 1] 0
AdaptiveAvgPool2d-51 [-1, 512, 1, 1] 0
Flatten-52 [-1, 512] 0
Linear-53 [-1, 10] 5,130
================================================================
Total params: 11,175,434
Trainable params: 11,175,434
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 2.10
Params size (MB): 42.63
Estimated Total Size (MB): 44.74
----------------------------------------------------------------
None
'''
Use thop to measure the model's computational cost.
from thop import profile

# thop's profile expects real input tensors; pass a random 1x1x32x32 image
FLOPs, PARAMs = profile(model, inputs=(torch.randn(1, 1, 32, 32),), report_missing=True)
print(FLOPs, PARAMs)
To verify that residual connections help the training of deep convolutional neural networks, we first run the handwritten-digit recognition experiment with ResNet18 with use_residual set to False, then enable the residual connections (use_residual set to True) and compare the results.
First, experiment with ResNet18 without residual connections.
Train on the training set and evaluate on the validation set for 5 epochs; the model with the highest validation accuracy is saved as the best model. The code is as follows.
from nndl import plot_training_loss_acc
# Learning rate
lr = 0.1
# Batch size
batch_size = 64
# Load the data
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = torch.utils.data.DataLoader(dev_dataset, batch_size=batch_size)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
# Define the ResNet18 model without residual connections
model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=False)
# Define the optimizer
optimizer = torch.optim.SGD(lr=lr, params=model.parameters())
# Define the loss function
loss_fn = F.cross_entropy
# Define the evaluation metric
metric = metric.Accuracy(is_logist=True)
# Instantiate RunnerV3
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Start training
log_steps = 15
eval_steps = 15
runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
             eval_steps=eval_steps, save_path="best_model.pdparams")
# Visualize the loss on the training and validation sets
plot_training_loss_acc(runner, 'cnn-loss2.pdf')
'''
The results are as follows:
[Train] epoch: 0/5, step: 0/80, loss: 2.86621
[Train] epoch: 0/5, step: 15/80, loss: 1.90175
[Evaluate] dev score: 0.24000, dev loss: 2.21688
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.24000
[Train] epoch: 1/5, step: 30/80, loss: 0.96294
[Evaluate] dev score: 0.63000, dev loss: 1.48276
[Evaluate] best accuracy performence has been updated: 0.24000 --> 0.63000
[Train] epoch: 2/5, step: 45/80, loss: 0.60419
[Evaluate] dev score: 0.73000, dev loss: 0.86856
[Evaluate] best accuracy performence has been updated: 0.63000 --> 0.73000
[Train] epoch: 3/5, step: 60/80, loss: 0.29706
[Evaluate] dev score: 0.79500, dev loss: 0.67405
[Evaluate] best accuracy performence has been updated: 0.73000 --> 0.79500
[Train] epoch: 4/5, step: 75/80, loss: 0.12274
[Evaluate] dev score: 0.79500, dev loss: 0.60627
[Evaluate] dev score: 0.80000, dev loss: 0.61389
[Evaluate] best accuracy performence has been updated: 0.79500 --> 0.80000
[Train] Training done!
'''
# Load the best model
runner.load_model('best_model.pdparams')
# Evaluate the model
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
'''
[Test] accuracy/loss: 0.8500/0.5529
'''
From the output, compared with the LeNet-5 evaluation results, simply making the network deeper does not improve performance; the results actually get worse.
Now repeat the experiment with ResNet18 with residual connections enabled. The code is as follows:
# Learning rate
lr = 0.1
# Batch size
batch_size = 64
# Load the data
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = torch.utils.data.DataLoader(dev_dataset, batch_size=batch_size)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
# Define the ResNet18 model with residual connections
model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=True)
# Define the optimizer
optimizer = torch.optim.SGD(lr=lr, params=model.parameters())
# Define the loss function
loss_fn = F.cross_entropy
# Define the evaluation metric
metric = metric.Accuracy(is_logist=True)
# Instantiate RunnerV3
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Start training
log_steps = 15
eval_steps = 15
runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
             eval_steps=eval_steps, save_path="best_model.pdparams")
# Visualize the loss on the training and validation sets
plot_training_loss_acc(runner, 'cnn-loss3.pdf')
'''
[Train] epoch: 0/5, step: 0/80, loss: 3.52215
[Train] epoch: 0/5, step: 15/80, loss: 0.60491
[Evaluate] dev score: 0.55500, dev loss: 1.35420
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.55500
[Train] epoch: 1/5, step: 30/80, loss: 0.13206
[Evaluate] dev score: 0.85000, dev loss: 0.48716
[Evaluate] best accuracy performence has been updated: 0.55500 --> 0.85000
[Train] epoch: 2/5, step: 45/80, loss: 0.03578
[Evaluate] dev score: 0.85500, dev loss: 0.40347
[Evaluate] best accuracy performence has been updated: 0.85000 --> 0.85500
[Train] epoch: 3/5, step: 60/80, loss: 0.01271
[Evaluate] dev score: 0.85500, dev loss: 0.38605
[Train] epoch: 4/5, step: 75/80, loss: 0.01311
[Evaluate] dev score: 0.86500, dev loss: 0.37689
[Evaluate] best accuracy performence has been updated: 0.85500 --> 0.86500
[Evaluate] dev score: 0.85500, dev loss: 0.38186
[Train] Training done!
'''
Evaluate the best model saved during training on the test set and observe its accuracy and loss.
# Load the best model
runner.load_model('best_model.pdparams')
# Evaluate the model
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
'''
[Test] accuracy/loss: 0.8750/0.3666
'''
With residual connections added, the model's convergence curves are smoother.
From the output, compared with ResNet18 without residual connections, adding them brings a clear improvement.
1. Comparing the loss and accuracy of the two models, the ResNet with residual connections shows a faster decrease in loss and reaches a higher score.
2. Why use residuals? A brief history.
It has become common knowledge that, up to a point, deeper networks have stronger representational power and better performance.
However, increasing depth also brings problems such as vanishing and exploding gradients. Were there no attempts to solve these before ResNet? Of course there were: better optimization methods, better initialization strategies, batch normalization, ReLU and other activation functions were all tried, but they only mitigated the problem to a limited degree, until residual connections came into widespread use.
In deep learning, parameters are updated by backpropagation. Consider a chain of layers:

$$f' = f(x, w_f)$$

$$g' = g(f')$$

$$y' = k(g')$$

$$\mathrm{cost} = \mathrm{loss}(y, y')$$

The derivative of the cost with respect to $w_f$ follows from the chain rule:

$$\frac{d(f')}{d(w_f)} \times \frac{d(g')}{d(f')} \times \frac{d(y')}{d(g')} \times \frac{d(\mathrm{cost})}{d(y')}$$
If any one of these factors is small, the gradient shrinks with every multiplication; this is the familiar vanishing-gradient problem, and in a deep network almost no gradient reaches the shallow layers. With a residual connection, each factor gains an identity term of 1: $dh/dx = d(f+x)/dx = 1 + df/dx$. Even if the original derivative $df/dx$ is very small, the error can still be propagated backward effectively. This is the core idea.
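A toy sketch of this effect (my own illustration, not part of the original experiment): stack layers whose local derivative is small and compare the gradient that reaches the input with and without identity shortcuts.

import torch

def forward(x, depth, residual):
    for _ in range(depth):
        out = 0.5 * x                       # stand-in for a layer with a small local derivative
        x = x + out if residual else out    # residual: h(x) = x + f(x)
    return x

for residual in (False, True):
    x = torch.tensor(1.0, requires_grad=True)
    forward(x, depth=20, residual=residual).backward()
    print(residual, x.grad.item())
# Without shortcuts the gradient is 0.5**20 (about 1e-6); with shortcuts each factor is 1 + 0.5,
# so the identity term keeps the backpropagated signal from vanishing.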
For a classic image-classification network such as ResNet18, ready-made implementations are provided by the high-level API in torchvision, so we do not have to build it from scratch. Here we give the torchvision resnet18 model and our custom ResNet18 the same weights, feed both the same input, and check whether the outputs agree.
import numpy as np
import torch
from torchvision.models import resnet18
import warnings
#warnings.filterwarnings("ignore")

# resnet18 implemented in torchvision.models; by default it expects 3 input channels and outputs 1000 classes
hapi_model = resnet18()
# Our custom resnet18 model
model = Model_ResNet18(in_channels=3, num_classes=1000, use_residual=True)

# Get the weights of the torchvision network
params = hapi_model.state_dict()
# Dictionary for the weights after the parameter names have been remapped
new_params = {}
# Map the parameter names
for key in params:
    if 'layer' in key:
        if 'downsample.0' in key:    # the 1x1 convolution on the shortcut maps to `shortcut`
            new_params['net.' + key[5:8] + '.shortcut' + key[-7:]] = params[key]
        elif 'downsample.1' in key:  # the batch norm on the shortcut maps to `bn3` in our ResBlock
            new_params['net.' + key[5:8] + '.bn3.' + key.split('.')[-1]] = params[key]
        else:
            new_params['net.' + key[5:]] = params[key]
    elif 'conv1.weight' == key:
        new_params['net.0.0.weight'] = params[key]
    elif 'bn1' in key:
        new_params['net.0.1' + key[3:]] = params[key]
    elif 'fc' in key:
        new_params['net.7' + key[2:]] = params[key]
# The first convolution in our custom model has a bias term, while torchvision's conv1 uses bias=False;
# fill it with zeros so that both models compute the same function
new_params['net.0.0.bias'] = torch.zeros(64)
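# Optional sanity check (not in the original code): confirm that the remapped keys
# line up with the custom model's state_dict before loading
missing = set(model.state_dict().keys()) - set(new_params.keys())
unexpected = set(new_params.keys()) - set(model.state_dict().keys())
print(missing, unexpected)  # both should print set()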
# Load the torchvision resnet18 weights into the custom resnet18 model so that the two are identical
model.load_state_dict(new_params)
# Use np.random to create a random array as test data
inputs = np.random.randn(*[1, 3, 32, 32])
inputs = inputs.astype('float32')
x = torch.as_tensor(inputs)
output = model(x)
hapi_out = hapi_model(x)
# Compute the difference between the two models' outputs
diff = output - hapi_out
# Take the largest absolute difference
max_diff = torch.max(torch.abs(diff))
print(max_diff)
'''
tensor(0., grad_fn=)
'''
From this result, once the weights are identical, our custom ResNet produces exactly the same output as the ResNet provided by the torch (torchvision) API.
Using a ready-made implementation is much simpler than writing the network by hand; here is a brief overview.
torchvision.models packages many popular network architectures, such as MobileNet and ResNet (see the official torchvision documentation).
Based on the references, they fall roughly into three categories:
classic networks
lightweight networks
networks obtained by neural architecture search
from torchvision.models import resnet18

resnet18 = resnet18(pretrained=True)
# Print the model structure (shown below)
print(resnet18)
'''
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=512, out_features=1000, bias=True)
)
'''
Here pretrained=True means the model is created with pretrained weights.
# Get the model's parameters
resnet18.state_dict()
# Load your own trained parameters (the path below is just a placeholder)
resnet18.load_state_dict(torch.load('my_resnet18.pth'))
Modify a layer of the model:
resnet18.fc = nn.Linear(512, 10)  # replace the corresponding layer; for ResNet18 the classification head is `fc` with 512 input features
Add a new layer:
resnet18.add_module("add_linear", nn.Linear(1000, 10))  # register an extra linear layer mapping the 1000-dim output to 10 classes (note: add_module only registers it; forward() will not call it automatically)
Based on the references I consulted, the difference between a residual network and a plain network is the extra $x$ term in the target function. In pursuit of better training results, networks were made deeper and deeper, which brought problems such as overfitting and vanishing gradients. Adding the identity term $x$ means the shortcut contributes a derivative of 1: $dh/dx = d(f+x)/dx = 1 + df/dx$, so even when the original derivative $df/dx$ is very small, the error can still be backpropagated effectively. The experiments above confirm this summary: the network with residual connections does train better than the one without. The torchvision.models interface used in this experiment also feels quite similar to working with scikit-learn.