Contents
Handwritten Digit Recognition with Residual Networks
Model Construction
Residual Unit
Overall Structure of the Residual Network
ResNet18 without Residual Connections
Model Training
Model Evaluation
ResNet18 with Residual Connections
Model Training
Model Evaluation
Comparison with the High-Level-API Implementation
PyTorch torchvision.models
Summary
References
A residual network (Residual Network, ResNet) eases the vanishing-gradient problem by adding direct (shortcut) edges across the nonlinear layers of a neural network, which makes deep networks much easier to train.
The most basic unit of a residual network is the residual unit.
Let f(x; θ) denote one or more neural layers; a residual unit adds a direct edge between the input and output of f.
Unlike a traditional network, which makes the layers approximate a target function h(x) directly, a residual network splits the target function into two parts, an identity function and a residual function:
    h(x) = x + (h(x) − x),
where θ denotes the learnable parameters, so the wrapped layers only need to fit the residual f(x; θ) ≈ h(x) − x.
A typical residual unit, shown in the figure below, consists of several cascaded convolutional layers and one cross-layer direct edge.
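To see why the direct edge helps gradients flow, note that for out = x + f(x) the gradient with respect to x is 1 + f′(x): even when f′(x) is nearly zero, the shortcut still passes the gradient through unattenuated. A minimal sketch with a toy one-parameter f (the weight value here is purely illustrative):

```python
import torch

# Toy residual unit: out = x + f(x), with f(x) = w * relu(x)
x = torch.tensor([1.0], requires_grad=True)
w = torch.tensor([0.01])   # a tiny weight, so f'(x) is close to 0

f = w * torch.relu(x)
plain = f                  # output without a shortcut
residual = x + f           # output with a shortcut

# Without the shortcut, the gradient w.r.t. x is f'(x) = 0.01;
# with it, the gradient is 1 + f'(x) = 1.01, so it cannot vanish
g_plain, = torch.autograd.grad(plain, x, retain_graph=True)
g_residual, = torch.autograd.grad(residual, x)
print(g_plain.item(), g_residual.item())  # approximately 0.01 and 1.01
```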
A residual network is built by stacking many residual units. Below we construct ResNet18, a classic residual network in computer vision, and repeat the handwritten digit recognition task from the previous experiment.
We first build the residual unit of ResNet18, and then assemble the complete network.
The input and output of the nonlinear layers wrapped by a residual unit must have the same shape. If a convolutional layer changes the number of channels, its output cannot be added to the input feature map directly. To solve this, we use a 1×1 convolution to map the input feature map to the same number of channels as the output of the cascaded convolutions.
1×1 convolution: identical to a standard convolution except that the kernel size is 1×1, so it ignores local spatial relationships in the input and focuses on interactions between channels. Its main uses here are adjusting the number of channels, so the shortcut can be added to the main branch, and fusing information across channels.
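As a quick illustrative sketch (shapes chosen arbitrarily), a 1×1 convolution changes only the channel dimension, and with stride 2 it can also match a halved feature map, which is exactly what the shortcut branch needs:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)            # N, C, H, W

# A 1x1 convolution only mixes information across channels:
# it maps 64 channels to 128 without touching the 32x32 spatial layout
conv1x1 = nn.Conv2d(64, 128, kernel_size=1)
print(conv1x1(x).shape)                   # torch.Size([1, 128, 32, 32])

# With stride=2 it additionally halves the spatial size,
# matching a main branch whose first conv uses stride 2
conv1x1_s2 = nn.Conv2d(64, 128, kernel_size=1, stride=2)
print(conv1x1_s2(x).shape)                # torch.Size([1, 128, 16, 16])
```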
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, use_residual=True):
        """
        Residual unit.
        Inputs:
            - in_channels: number of input channels
            - out_channels: number of output channels
            - stride: stride of the residual unit, controlled via the first conv layer
            - use_residual: whether to use the residual connection
        """
        super(ResBlock, self).__init__()
        self.stride = stride
        self.use_residual = use_residual
        # First conv layer, 3x3 kernel; output channels and stride are configurable
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, padding=1, stride=self.stride, bias=False)
        # Second conv layer, 3x3 kernel, stride 1; keeps the feature map shape
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        # If the output of conv2 and the block input have different shapes, set use_1x1conv = True
        # and apply a 1x1 convolution to the input to match the shape of conv2's output
        if in_channels != out_channels or stride != 1:
            self.use_1x1conv = True
        else:
            self.use_1x1conv = False
        # When the wrapped layers' input and output channel counts differ,
        # adjust the channels with a 1x1 conv before the addition
        if self.use_1x1conv:
            self.shortcut = nn.Conv2d(in_channels, out_channels, 1, stride=self.stride, bias=False)
        # Each conv layer is followed by a batch normalization layer
        # (batch normalization is covered in detail in Section 7.5.1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        if self.use_1x1conv:
            self.bn3 = nn.BatchNorm2d(out_channels)

    def forward(self, inputs):
        y = F.relu(self.bn1(self.conv1(inputs)))
        y = self.bn2(self.conv2(y))
        if self.use_residual:
            if self.use_1x1conv:  # apply the 1x1 conv so inputs match conv2's output y
                shortcut = self.shortcut(inputs)
                shortcut = self.bn3(shortcut)
            else:                 # otherwise add inputs and conv2's output y directly
                shortcut = inputs
            y = torch.add(shortcut, y)
        out = F.relu(y)
        return out
A residual network strings many residual units together into a very deep network. The structure of ResNet18 is shown in the figure below.
For ease of understanding, ResNet18 can be divided into six modules: module one (a 7×7 convolution, batch normalization, and max pooling), modules two to five (each containing two residual units), and module six (global average pooling followed by a fully connected layer).
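The halving pattern across the six modules can be sketched with plain stride-2 convolutions standing in for the residual modules (the 1×64×64 input size is hypothetical; BN/ReLU are omitted since they do not affect shapes):

```python
import torch
import torch.nn as nn

# Spatial size after each module, for a hypothetical 1x64x64 input
x = torch.randn(1, 1, 64, 64)
sizes = [x.shape[-1]]

# Module one: 7x7 conv stride 2, then 3x3 max pool stride 2 -> size / 4
stem = nn.Sequential(nn.Conv2d(1, 64, 7, 2, 3), nn.MaxPool2d(3, 2, 1))
x = stem(x); sizes.append(x.shape[-1])

# Modules two to five: stride 1 when channels are unchanged, else stride 2
for c_in, c_out in [(64, 64), (64, 128), (128, 256), (256, 512)]:
    stride = 1 if c_in == c_out else 2
    x = nn.Conv2d(c_in, c_out, 3, stride, 1)(x)
    sizes.append(x.shape[-1])

print(sizes)  # [64, 16, 16, 8, 4, 2]; module six then pools down to 1x1
```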
Model implementation:
Define module one
def make_first_module(in_channels):
    # Module one: 7x7 conv, batch normalization, pooling
    # (bias=False matches torchvision's resnet18 conv1, which the weight-copy comparison below relies on)
    m1 = nn.Sequential(nn.Conv2d(in_channels, 64, 7, stride=2, padding=3, bias=False),
                       nn.BatchNorm2d(64), nn.ReLU(),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    return m1
Define modules two to five
def resnet_module(input_channels, out_channels, num_res_blocks, stride=1, use_residual=True):
    blk = []
    # Generate num_res_blocks residual units in a loop
    for i in range(num_res_blocks):
        if i == 0:  # the first residual unit of the module
            blk.append(ResBlock(input_channels, out_channels,
                                stride=stride, use_residual=use_residual))
        else:       # the remaining residual units of the module
            blk.append(ResBlock(out_channels, out_channels, use_residual=use_residual))
    return blk
Wrap modules two to five
def make_modules(use_residual):
    # Module two: two residual units, 64 -> 64 channels, stride 1; feature map size unchanged
    m2 = nn.Sequential(*resnet_module(64, 64, 2, stride=1, use_residual=use_residual))
    # Module three: two residual units, 64 -> 128 channels, stride 2; feature map halved
    m3 = nn.Sequential(*resnet_module(64, 128, 2, stride=2, use_residual=use_residual))
    # Module four: two residual units, 128 -> 256 channels, stride 2; feature map halved
    m4 = nn.Sequential(*resnet_module(128, 256, 2, stride=2, use_residual=use_residual))
    # Module five: two residual units, 256 -> 512 channels, stride 2; feature map halved
    m5 = nn.Sequential(*resnet_module(256, 512, 2, stride=2, use_residual=use_residual))
    return m2, m3, m4, m5
Define the complete network
class Model_ResNet18(nn.Module):
    def __init__(self, in_channels=3, num_classes=10, use_residual=True):
        super(Model_ResNet18, self).__init__()
        m1 = make_first_module(in_channels)
        m2, m3, m4, m5 = make_modules(use_residual)
        # Wrap modules one to six
        self.net = nn.Sequential(m1, m2, m3, m4, m5,
                                 # Module six: pooling layer and fully connected layer
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes))

    def forward(self, x):
        return self.net(x)
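One detail worth noting in module six: nn.AdaptiveAvgPool2d(1) squeezes any spatial size down to 1×1, so the same nn.Linear(512, num_classes) head works regardless of the input resolution. A minimal sketch (shapes chosen for illustration):

```python
import torch
import torch.nn as nn

# The classification head of module six in isolation
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 10))

# Global average pooling reduces any HxW to 1x1,
# so the head accepts feature maps of different spatial sizes
for hw in (1, 4, 7):
    x = torch.randn(2, 512, hw, hw)
    print(head(x).shape)                  # torch.Size([2, 10]) each time
```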
To verify the effect of residual connections, we first run the experiment with a ResNet18 that has no residual connections.
Train on the training set and validate on the dev set for 5 epochs, saving the model with the highest validation accuracy as the best model.
import torch.optim as opti
from torch.utils.data import DataLoader
from nndl1 import metric
# RunnerV3 and the datasets come from the earlier experiments / course helpers;
# cross-entropy is assumed here as the classification loss
loss_fn = F.cross_entropy
# Fix the random seed
torch.manual_seed(102)
# Learning rate
lr = 0.005
# Batch size
batch_size = 64
# Load the data
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = DataLoader(dev_dataset, batch_size=batch_size)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
# Define the network: a deep network without residual connections
model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=False)
# Define the optimizer
optimizer = opti.SGD(model.parameters(), lr=lr)
# Instantiate RunnerV3
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Start training
log_steps = 15
eval_steps = 15
runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
             eval_steps=eval_steps, save_path="best_model.pdparams")
Output:
[Train] epoch: 0/5, step: 0/160, loss: 2.34224
[Train] epoch: 0/5, step: 15/160, loss: 1.31700
C:\Users\PycharmProjects\pythonProject\py: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
batch_correct = torch.sum(torch.tensor(preds == labels, dtype=torch.float32)).numpy()
[Evaluate] dev score: 0.09500, dev loss: 2.30257
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.09500
[Train] epoch: 0/5, step: 30/160, loss: 0.66286
[Evaluate] dev score: 0.11500, dev loss: 2.29716
[Evaluate] best accuracy performence has been updated: 0.09500 --> 0.11500
[Train] epoch: 1/5, step: 45/160, loss: 0.39897
[Evaluate] dev score: 0.55000, dev loss: 1.51973
[Evaluate] best accuracy performence has been updated: 0.11500 --> 0.55000
[Train] epoch: 1/5, step: 60/160, loss: 0.31825
[Evaluate] dev score: 0.92500, dev loss: 0.40170
[Evaluate] best accuracy performence has been updated: 0.55000 --> 0.92500
[Train] epoch: 2/5, step: 75/160, loss: 0.12904
[Evaluate] dev score: 0.93500, dev loss: 0.22754
[Evaluate] best accuracy performence has been updated: 0.92500 --> 0.93500
[Train] epoch: 2/5, step: 90/160, loss: 0.13832
[Evaluate] dev score: 0.90500, dev loss: 0.27319
[Train] epoch: 3/5, step: 105/160, loss: 0.10297
[Evaluate] dev score: 0.93000, dev loss: 0.24763
[Train] epoch: 3/5, step: 120/160, loss: 0.12388
[Evaluate] dev score: 0.93000, dev loss: 0.18329
[Train] epoch: 4/5, step: 135/160, loss: 0.20459
[Evaluate] dev score: 0.91000, dev loss: 0.25516
[Train] epoch: 4/5, step: 150/160, loss: 0.09298
[Evaluate] dev score: 0.93500, dev loss: 0.18513
[Evaluate] dev score: 0.89500, dev loss: 0.30162
[Train] Training done!
Visualization:
import matplotlib.pyplot as plt

# Visualization
def plot(runner, fig_name):
    plt.figure(figsize=(10, 5))
    plt.subplot(1, 2, 1)
    train_items = runner.train_step_losses[::30]
    train_steps = [x[0] for x in train_items]
    train_losses = [x[1] for x in train_items]
    plt.plot(train_steps, train_losses, color='#8E004D', label="Train loss")
    if runner.dev_losses[0][0] != -1:
        dev_steps = [x[0] for x in runner.dev_losses]
        dev_losses = [x[1] for x in runner.dev_losses]
        plt.plot(dev_steps, dev_losses, color='#E20079', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='x-large')
    plt.xlabel("step", fontsize='x-large')
    plt.legend(loc='upper right', fontsize='x-large')
    plt.subplot(1, 2, 2)
    # Plot the validation accuracy curve
    if runner.dev_losses[0][0] != -1:
        plt.plot(dev_steps, runner.dev_scores,
                 color='#E20079', linestyle="--", label="Dev accuracy")
    else:
        plt.plot(list(range(len(runner.dev_scores))), runner.dev_scores,
                 color='#E20079', linestyle="--", label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='x-large')
    plt.xlabel("step", fontsize='x-large')
    plt.legend(loc='lower right', fontsize='x-large')
    plt.savefig(fig_name)
    plt.show()
Evaluate the best model saved during training on the test set, and observe its accuracy and loss.
# Load the best model
runner.load_model('best_model.pdparams')
# Evaluate the model
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
Output:
[Test] accuracy/loss: 0.9320/0.4327
Compared with the LeNet-5 evaluation results, deepening the network actually hurts performance rather than helping it.
Now repeat the experiment with a ResNet18 that uses residual connections.
# Learning rate
lr = 0.01
# Batch size
batch_size = 64
# Load the data
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = DataLoader(dev_dataset, batch_size=batch_size)
test_loader = DataLoader(test_dataset, batch_size=batch_size)
# Define the network; use_residual=True enables the residual connections
model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=True)
# Define the optimizer
optimizer = opti.SGD(model.parameters(), lr=lr)
# Instantiate RunnerV3
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Start training
log_steps = 15
eval_steps = 15
runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
             eval_steps=eval_steps, save_path="best_model.pdparams")
# Visualize the training and dev loss curves
plot(runner, 'cnn-loss3.pdf')
Output:
[Train] epoch: 0/5, step: 0/160, loss: 2.46978
[Train] epoch: 0/5, step: 15/160, loss: 0.52145
[Evaluate] dev score: 0.19000, dev loss: 2.29718
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.19000
[Train] epoch: 0/5, step: 30/160, loss: 0.22503
[Evaluate] dev score: 0.39500, dev loss: 1.75715
[Evaluate] best accuracy performence has been updated: 0.19000 --> 0.39500
[Train] epoch: 1/5, step: 45/160, loss: 0.13266
[Evaluate] dev score: 0.90000, dev loss: 0.37835
[Evaluate] best accuracy performence has been updated: 0.39500 --> 0.90000
[Train] epoch: 1/5, step: 60/160, loss: 0.07993
[Evaluate] dev score: 0.90500, dev loss: 0.23769
[Evaluate] best accuracy performence has been updated: 0.90000 --> 0.90500
[Train] epoch: 2/5, step: 75/160, loss: 0.03920
[Evaluate] dev score: 0.94500, dev loss: 0.13020
[Evaluate] best accuracy performence has been updated: 0.90500 --> 0.94500
[Train] epoch: 2/5, step: 90/160, loss: 0.04129
[Evaluate] dev score: 0.95500, dev loss: 0.11184
[Evaluate] best accuracy performence has been updated: 0.94500 --> 0.95500
[Train] epoch: 3/5, step: 105/160, loss: 0.01144
[Evaluate] dev score: 0.95500, dev loss: 0.10348
[Train] epoch: 3/5, step: 120/160, loss: 0.00599
[Evaluate] dev score: 0.96500, dev loss: 0.09905
[Evaluate] best accuracy performence has been updated: 0.95500 --> 0.96500
[Train] epoch: 4/5, step: 135/160, loss: 0.00453
[Evaluate] dev score: 0.95500, dev loss: 0.09267
[Train] epoch: 4/5, step: 150/160, loss: 0.00763
[Evaluate] dev score: 0.95500, dev loss: 0.08276
[Evaluate] dev score: 0.95500, dev loss: 0.07131
[Train] Training done!
Evaluate the best model saved during training on the test set, and observe its accuracy and loss.
# Load the best model
runner.load_model('best_model.pdparams')
# Evaluate the model
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
[Test] accuracy/loss: 0.9800/0.0356
With residual connections added, the model's convergence curve is smoother.
The output also shows that, compared with the ResNet without residual connections, adding them clearly improves the model.
For a classic image-classification network like ResNet18, torchvision provides a ready-made implementation, so you do not have to build it from scratch. Here we give the torchvision resnet18 model and our custom resnet18 model the same weights, feed them the same input, and check whether their outputs match.
import numpy as np
import warnings
from torchvision.models import resnet18
# warnings.filterwarnings("ignore")

# The torchvision resnet18 model; by default it expects 3 input channels and outputs 1000 classes
torch_model = resnet18()
# The custom resnet18 model
# Note: make_first_module's 7x7 conv must be created with bias=False to match
# torchvision's conv1; otherwise load_state_dict reports a missing net.0.0.bias key
model = Model_ResNet18(in_channels=3, num_classes=1000, use_residual=True)
# Get the torchvision network's weights
params = torch_model.state_dict()
# Holds the weights under the remapped parameter names
new_params = {}
# Remap the parameter names
for key in params:
    if 'layer' in key:
        if 'downsample.0' in key:    # shortcut conv: layerX.Y.downsample.0.* -> net.X.Y.shortcut.*
            new_params['net.' + key[5:8] + '.shortcut' + key[-7:]] = params[key]
        elif 'downsample.1' in key:  # shortcut BN: layerX.Y.downsample.1.* -> net.X.Y.bn3.*
            new_params['net.' + key[5:8] + '.bn3' + key[21:]] = params[key]
        else:
            new_params['net.' + key[5:]] = params[key]
    elif 'conv1.weight' == key:
        new_params['net.0.0.weight'] = params[key]
    elif 'bn1' in key:
        new_params['net.0.1' + key[3:]] = params[key]
    elif 'fc' in key:
        new_params['net.7' + key[2:]] = params[key]
# Copy the torchvision weights into the custom model so the two are identical
model.load_state_dict(new_params)
# Create a random array with np.random as test data
inputs = np.random.randn(1, 3, 32, 32).astype('float32')
x = torch.as_tensor(inputs)
# Switch both models to eval mode so batch normalization uses its running statistics
model.eval()
torch_model.eval()
output = model(x)
torch_out = torch_model(x)
# Largest absolute difference between the two models' outputs
max_diff = torch.max(torch.abs(output - torch_out))
print(max_diff)
'''
tensor(0., grad_fn=)
'''
When the weights are identical, the torchvision resnet18 and the custom resnet18 produce identical outputs, which shows that the two implementations are exactly the same.
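The key-remapping trick used above can be illustrated on two tiny, structurally identical modules whose parameters merely have different names (the module and key names in this sketch are hypothetical):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Two structurally identical models whose parameters have different names:
# "0.weight"/"0.bias" in src vs "fc.weight"/"fc.bias" in dst
src = nn.Sequential(nn.Linear(4, 3))

class Dst(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 3)
    def forward(self, x):
        return self.fc(x)

dst = Dst()
# Remap the parameter names, then copy the weights over
remapped = {'fc' + k[1:]: v for k, v in src.state_dict().items()}
dst.load_state_dict(remapped)

# With identical weights, the two models agree exactly on the same input
x = torch.randn(2, 4)
print(torch.max(torch.abs(src(x) - dst(x))).item())  # prints 0.0
```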
Link:
resnet18 — Torchvision 0.14 documentation
Here is another example from the torchvision documentation:
Working with tensor images
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import torch
import torchvision.transforms as T
from torchvision.io import read_image

plt.rcParams["savefig.bbox"] = 'tight'
torch.manual_seed(1)

def show(imgs):
    fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)
    for i, img in enumerate(imgs):
        img = T.ToPILImage()(img.to('cpu'))
        axs[0, i].imshow(np.asarray(img))
        axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
The read_image() function reads an image and loads it directly as a tensor:
dog1 = read_image(str(Path('assets') / 'dog1.jpg'))
dog2 = read_image(str(Path('assets') / 'dog2.jpg'))
show([dog1, dog2])
torchvision.models also provides many other network architectures.
ResNet appeared in 2015 and won first place in that year's ImageNet classification task; compared with earlier feature-extraction networks such as AlexNet, VGGNet, and GoogLeNet, it was a historic breakthrough.
The foundation of ResNet is the residual block, such as the two common variants shown below.
A single residual block can then be extended with pooling, batch normalization, and other operations.
ResNet follows VGG's complete 3×3 convolutional layer design. A residual block first has two 3×3 convolutional layers with the same number of output channels, each followed by a batch normalization layer and a ReLU activation. A cross-layer data path then skips these two convolutions and adds the input directly before the final ReLU activation. This design requires the two convolutional layers' output to have the same shape as the input so that they can be added. To change the number of channels, an extra 1×1 convolutional layer is introduced to transform the input into the required shape before the addition.
The earlier feed-forward network experiments showed that blindly deepening a network does not improve accuracy; instead it makes the network hard to train, because backpropagation then suffers from exploding or vanishing gradients and training struggles to converge. Residual networks changed this situation: with ResNet feature extractors, depth can be increased dramatically, from a dozen or so layers to 152, without this degradation. The experiments above likewise show that with residual connections the convergence curve is smoother and the model performs better.
NNDL 实验5(上) - HBU_DAVID - 博客园
ResNet18网络的具体构成 - daweq的博客 - CSDN博客
ResNet网络进化史 - ymsamlx的博客 - CSDN博客
6. 卷积神经网络 — 动手学深度学习 2.0.0-beta1 documentation
7. 现代卷积神经网络 — 动手学深度学习 2.0.0-beta1 documentation