NNDL Experiment 6: Convolutional Neural Networks (4) - ResNet18 on MNIST

5.4 Handwritten Digit Recognition Based on Residual Networks

A residual network (Residual Network, ResNet) adds direct shortcut connections across the nonlinear layers of a neural network to alleviate the vanishing-gradient problem, which makes deep networks much easier to train.

The basic building block of a residual network is the residual unit.

5.4.1 Model Construction

We first build ResNet18's residual unit, and then assemble the complete network from it.

5.4.1.1 The Residual Unit

[Figure 1: structure of a residual unit]
Image source: https://blog.csdn.net/weixin_44025103/article/details/126011432?
Explanation: the residual unit adds a shortcut connection to its output, which directly adds 1 to the partial derivative with respect to the input of the first wrapped layer. For ResNet18, concentrating this effect in one place would be too coarse, so the shortcut structure is spread throughout the whole network, which moderately enlarges the gradients of the earlier layers and alleviates the vanishing-gradient problem caused by stacking many layers. Taken to the extreme, chaining a very large number of residual units in series would still let the gradient decay, and with enough of them vanishing gradients could reappear, but in practice networks rarely reach that depth.
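In symbols: if a residual unit computes y = x + F(x), then by the chain rule ∂y/∂x = 1 + ∂F(x)/∂x, so the shortcut contributes a constant 1 to the gradient that survives no matter how small ∂F/∂x becomes; this is exactly the "adds 1 to the partial derivative" effect described above.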

The nonlinear layers wrapped by a residual unit must have input and output of the same shape.

If the number of channels of a convolutional layer's input feature map differs from that of its output feature map, the output cannot be added directly to the input.

In that case, a 1×1 convolution can be used to map the input feature map's channel count to match that of the output of the stacked convolutions.

A 1×1 convolution is exactly a standard convolution whose kernel size happens to be 1×1; it ignores the spatial relations inside a local neighbourhood and focuses entirely on interactions between channels.

Using a 1×1 convolution has the following benefits (a small code sketch follows the list):

It enables cross-channel interaction and information integration. Since the inputs and outputs of a convolution are 3-dimensional (width, height, channels), a 1×1 convolution is effectively a per-pixel linear combination across channels, integrating information from different channels;
It raises or lowers the number of channels, reducing the parameter count. The output of a 1×1 convolution keeps the input's original spatial layout, and by adjusting the number of output channels it performs dimensionality expansion or reduction;
Followed by a nonlinear activation, it adds extra nonlinearity while keeping the spatial size of the feature map unchanged.
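
As a concrete illustration of the channel-matching use, here is a minimal sketch (the tensor sizes are chosen only for the example):

import torch
import torch.nn as nn

x = torch.randn(8, 64, 14, 14)             # input feature map with 64 channels
body = nn.Sequential(                      # stacked 3x3 convolutions that change the channel count to 128
    nn.Conv2d(64, 128, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1),
)
proj = nn.Conv2d(64, 128, kernel_size=1)   # 1x1 convolution: maps 64 -> 128 channels, leaves H and W untouched
y = body(x) + proj(x)                      # shapes now match, so the shortcut can be added element-wise
print(y.shape)                             # torch.Size([8, 128, 14, 14])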

5.4.1.2 Overall Structure of the Residual Network


Image source: https://blog.csdn.net/ljfwz153076024/article/details/95063432?
The figure above shows the 34-layer ResNet.
[Figure 2: ResNet18 architecture]
Image source: https://blog.csdn.net/weixin_51715088/article/details/127710992
We use the 18-layer version.
ResNet18 without residual connections:

import torch
import torch.nn as nn

class ResNet(torch.nn.Module):
    '''ResNet18-style plain network: the same layer stack, but WITHOUT residual connections.'''
    def __init__(self, in_channels=1, num_classes=10):
        super(ResNet, self).__init__()
        # stem; MNIST images are single-channel, so in_channels defaults to 1
        self.conv1=nn.Sequential(
            nn.Conv2d(in_channels=in_channels,out_channels=64,kernel_size=(7,7),stride=(2,2),padding=3,bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True)
        )
        self.conv2=nn.Sequential(
            nn.MaxPool2d(kernel_size=(3,3),stride=2),
            self.__model(64,128)
        )
        self.conv3=self.__model(128,256)

        self.conv4=self.__model(256,512)

        self.conv5=self.__model(512,512)
        # global average pooling so the classifier always sees a 512-dimensional vector
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)


    def __model(self,in_channels,out_channels):
        '''two 3x3 convolutions (padding=1 keeps the spatial size); avoids repeated code'''
        return nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=(3, 3), padding=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size=(3, 3), padding=1),
            nn.BatchNorm2d(out_channels)
        )


    def forward(self,x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        out = self.conv4(out)
        out = self.conv5(out)
        out = self.avgpool(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
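
A quick shape check of the plain network (a sketch; the batch size below is illustrative, and the 32x32 grayscale input matches the preprocessing used later):

model = ResNet()
x = torch.randn(8, 1, 32, 32)   # 8 grayscale images of size 32x32
print(model(x).shape)           # expected: torch.Size([8, 10])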

ResNet18 with residual connections:

import torch
import torch.nn as nn
import torch.nn.functional as F
class ResBlock(nn.Module):
    '''Residual unit: two 3x3 convolutions wrapped by a shortcut connection.'''
    def __init__(self, in_channels, out_channels, stride=1, use_residual=True):
        super(ResBlock, self).__init__()
        self.stride = stride
        self.use_residual = use_residual
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, padding=1, stride=self.stride, bias=False)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)

        # if the input and output shapes differ (channel count or stride), the shortcut
        # must be projected with a 1x1 convolution so the two tensors can be added
        if in_channels != out_channels or stride != 1:
            self.use_1x1conv = True
        else:
            self.use_1x1conv = False
        if self.use_1x1conv:
            self.shortcut = nn.Conv2d(in_channels, out_channels, 1, stride=self.stride, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        if self.use_1x1conv:
            self.bn3 = nn.BatchNorm2d(out_channels)

    def forward(self, inputs):
        y = F.relu(self.bn1(self.conv1(inputs)))
        y = self.bn2(self.conv2(y))
        if self.use_residual:
            if self.use_1x1conv:
                # project the input so it matches the output shape, then add
                shortcut = self.shortcut(inputs)
                shortcut = self.bn3(shortcut)
            else:
                # shapes already match: identity shortcut
                shortcut = inputs
            y = torch.add(shortcut, y)
        out = F.relu(y)
        return out
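
The blocks above still have to be assembled into the full 18-layer network. The comparison in Section 5.4.4 refers to a model named Model_ResNet18 whose parameters live under a net.* prefix. The following is a minimal sketch of such a wrapper, written to be consistent with those parameter names (net.0 is the stem, net.1 to net.4 are the four stages, net.7 is the classifier); it is an assumption about the exact implementation, not a verbatim copy of the author's code:

class Model_ResNet18(nn.Module):
    '''Sketch: stem + 4 stages of 2 ResBlocks each + global average pooling + linear classifier.'''
    def __init__(self, in_channels=3, num_classes=10, use_residual=True):
        super(Model_ResNet18, self).__init__()
        # stem: 7x7 convolution, BN, ReLU, 3x3 max pooling
        b1 = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )
        # four stages; the first block of stages 2-4 halves H/W and doubles the channels
        b2 = nn.Sequential(ResBlock(64, 64, 1, use_residual), ResBlock(64, 64, 1, use_residual))
        b3 = nn.Sequential(ResBlock(64, 128, 2, use_residual), ResBlock(128, 128, 1, use_residual))
        b4 = nn.Sequential(ResBlock(128, 256, 2, use_residual), ResBlock(256, 256, 1, use_residual))
        b5 = nn.Sequential(ResBlock(256, 512, 2, use_residual), ResBlock(512, 512, 1, use_residual))
        self.net = nn.Sequential(
            b1, b2, b3, b4, b5,
            nn.AdaptiveAvgPool2d((1, 1)),   # net.5: global average pooling
            nn.Flatten(),                   # net.6
            nn.Linear(512, num_classes),    # net.7: classifier
        )

    def forward(self, x):
        return self.net(x)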

5.4.2 ResNet18 Without Residual Connections

We first run the experiment with the ResNet18 that has no residual connections.

5.4.2.1 Model Training

import json
import gzip
import numpy as np
from PIL import Image
import torch
import torch.optim as opt
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
# RunnerV3 and Accuracy are the runner and accuracy-metric classes provided with the course code
# (assumed to be importable from the accompanying nndl utilities).

if __name__=='__main__':
    # load the MNIST archive and take a small subset to keep the experiment fast
    train_set, dev_set, test_set = json.load(gzip.open('./mnist.json.gz'))
    train_images, train_labels = train_set[0][:1000], train_set[1][:1000]
    dev_images, dev_labels = dev_set[0][:200], dev_set[1][:200]
    test_images, test_labels = test_set[0][:200], test_set[1][:200]
    train_set, dev_set, test_set = [train_images, train_labels], [dev_images, dev_labels], [test_images, test_labels]

    # data preprocessing: resize to 32x32, convert to tensor, normalize to [-1, 1]
    transform = transforms.Compose(
        [transforms.Resize(32), transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5])])


    class MNIST_dataset(Dataset):
        def __init__(self, dataset, transforms, mode='train'):
            self.mode = mode
            self.transforms = transforms
            self.dataset = dataset

        def __getitem__(self, idx):
            # fetch the image and its label by index
            image, label = self.dataset[0][idx], self.dataset[1][idx]
            image, label = np.array(image).astype('float32'), int(label)
            image = np.reshape(image, [28, 28])
            image = Image.fromarray(image.astype('uint8'), mode='L')
            image = self.transforms(image)

            return image, label

        def __len__(self):
            return len(self.dataset[0])


    # build the MNIST datasets
    train_dataset = MNIST_dataset(dataset=train_set, transforms=transform, mode='train')
    test_dataset = MNIST_dataset(dataset=test_set, transforms=transform, mode='test')
    dev_dataset = MNIST_dataset(dataset=dev_set, transforms=transform, mode='dev')

    # learning rate
    lr = 0.005
    # batch size
    batch_size = 64
    # build the data loaders
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    dev_loader = DataLoader(dev_dataset, batch_size=batch_size)
    test_loader = DataLoader(test_dataset, batch_size=batch_size)
    # define the network: the deep model WITHOUT residual connections
    model = ResNet()
    # define the optimizer
    optimizer = opt.SGD(model.parameters(), lr)
    loss_fn = F.cross_entropy
    # define the evaluation metric
    metric = Accuracy()
    # instantiate RunnerV3
    runner = RunnerV3(model, optimizer, loss_fn, metric)
    # start training
    log_steps = 15
    eval_steps = 15
    runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
                 eval_steps=eval_steps, save_path="best_model.pdparams")

5.4.2.2 Model Evaluation

[Train] epoch: 0/5, step: 0/160, loss: 2.34224
[Train] epoch: 0/5, step: 15/160, loss: 1.31700
[Evaluate]  dev score: 0.09500, dev loss: 2.30257
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.09500
[Train] epoch: 0/5, step: 30/160, loss: 0.66286
[Evaluate]  dev score: 0.11500, dev loss: 2.29716
[Evaluate] best accuracy performence has been updated: 0.09500 --> 0.11500
[Train] epoch: 1/5, step: 45/160, loss: 0.39897
[Evaluate]  dev score: 0.55000, dev loss: 1.51973
...
[Train] epoch: 3/5, step: 120/160, loss: 0.12388
[Evaluate]  dev score: 0.93000, dev loss: 0.18329
[Train] epoch: 4/5, step: 135/160, loss: 0.20459
[Evaluate]  dev score: 0.91000, dev loss: 0.25516
[Train] epoch: 4/5, step: 150/160, loss: 0.09298
[Evaluate]  dev score: 0.93500, dev loss: 0.18513
[Evaluate]  dev score: 0.89500, dev loss: 0.30162
[Train] Training done!
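
To report the accuracy on the held-out test set, the best checkpoint saved during training can be reloaded and evaluated. A minimal sketch, assuming the course RunnerV3 exposes load_model and evaluate as in the textbook runner (adjust the calls if your runner differs):

# reload the best checkpoint found on the dev set, then evaluate on the test set
runner.load_model('best_model.pdparams')     # assumed RunnerV3 helper
score, loss = runner.evaluate(test_loader)   # assumed to return (accuracy, loss)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))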

5.4.3 ResNet18 With Residual Connections

We then repeat the experiment above with the ResNet18 that does have residual connections.

5.4.3.1 Model Training

if __name__=='__main__':
    train_set, dev_set, test_set = json.load(gzip.open('./mnist.json.gz'))
    train_images, train_labels = train_set[0][:1000], train_set[1][:1000]
    dev_images, dev_labels = dev_set[0][:200], dev_set[1][:200]
    test_images, test_labels = test_set[0][:200], test_set[1][:200]
    train_set, dev_set, test_set = [train_images, train_labels], [dev_images, dev_labels], [test_images, test_labels]

    # data preprocessing
    transform = transforms.Compose(
        [transforms.Resize(32), transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5])])


    class MNIST_dataset(Dataset):
        def __init__(self, dataset, transforms, mode='train'):
            self.mode = mode
            self.transforms = transforms
            self.dataset = dataset

        def __getitem__(self, idx):
            # fetch the image and its label by index
            image, label = self.dataset[0][idx], self.dataset[1][idx]
            image, label = np.array(image).astype('float32'), int(label)
            image = np.reshape(image, [28, 28])
            image = Image.fromarray(image.astype('uint8'), mode='L')
            image = self.transforms(image)

            return image, label

        def __len__(self):
            return len(self.dataset[0])


    # build the MNIST datasets
    train_dataset = MNIST_dataset(dataset=train_set, transforms=transform, mode='train')
    test_dataset = MNIST_dataset(dataset=test_set, transforms=transform, mode='test')
    dev_dataset = MNIST_dataset(dataset=dev_set, transforms=transform, mode='dev')
    # learning rate and batch size (defined before the loaders that use them)
    lr = 0.001
    batch_size = 64
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    dev_loader = DataLoader(dev_dataset, batch_size=batch_size)
    test_loader = DataLoader(test_dataset, batch_size=batch_size)
    # define the network: the deep model WITH residual connections
    model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=True)
    # define the optimizer
    optimizer = opt.SGD(model.parameters(), lr)
    loss_fn = F.cross_entropy
    # define the evaluation metric
    metric = Accuracy()
    # instantiate RunnerV3
    runner = RunnerV3(model, optimizer, loss_fn, metric)
    log_steps = 15
    eval_steps = 15
    runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
                 eval_steps=eval_steps, save_path="best_model.pdparams")

5.4.3.2 Model Evaluation

[Train] epoch: 0/5, step: 0/160, loss: 2.34224
[Train] epoch: 0/5, step: 15/160, loss: 1.31700
[Evaluate]  dev score: 0.09500, dev loss: 2.30257
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.09500
[Train] epoch: 0/5, step: 30/160, loss: 0.66286
[Evaluate]  dev score: 0.11500, dev loss: 2.29716
[Evaluate] best accuracy performence has been updated: 0.09500 --> 0.11500
...
[Train] epoch: 3/5, step: 105/160, loss: 0.10297
[Evaluate]  dev score: 0.93000, dev loss: 0.24763
[Train] epoch: 3/5, step: 120/160, loss: 0.12388
[Evaluate]  dev score: 0.93000, dev loss: 0.18329
[Train] epoch: 4/5, step: 135/160, loss: 0.20459
[Evaluate]  dev score: 0.91000, dev loss: 0.25516
[Train] epoch: 4/5, step: 150/160, loss: 0.09298
[Evaluate]  dev score: 0.93500, dev loss: 0.18513
[Evaluate]  dev score: 0.89500, dev loss: 0.30162
[Train] Training done.

5.4.4 Comparison with the High-Level-API Implementation

The PaddlePaddle high-level API is a further encapsulation and upgrade of the basic Paddle API; it provides a more concise, easier-to-use interface and makes the framework easier to learn and use.

The high-level API wraps the following modules:

the Model class, which lets you train a model in just a few lines of code;
an image-preprocessing module containing dozens of data-processing functions, covering most common data-processing and data-augmentation methods;
common models for computer vision and natural language processing, including but not limited to mobilenet, resnet, yolov3, cyclegan, bert, transformer, seq2seq, and so on, together with pretrained weights that can be used directly or fine-tuned.
The high-level API mainly lives in the paddle.vision and paddle.text packages.

For classic image-classification networks such as ResNet18, the high-level API already ships a ready-made implementation (in this PyTorch port, torchvision.models.resnet18 plays the same role), so we do not need to implement it from scratch.

Here we give the high-level-API version of resnet18 and our custom resnet18 identical weights, feed both the same input, and check whether their outputs agree.

import warnings
#warnings.filterwarnings("ignore")
import numpy as np
import torch
from torchvision.models import resnet18

# High-level-API reference model: torchvision's resnet18 (standing in for the Paddle HAPI version);
# by default it expects 3 input channels and outputs 1000 classes
hapi_model = resnet18(pretrained=True)
# the custom resnet18 model, configured identically
model = Model_ResNet18(in_channels=3, num_classes=1000, use_residual=True)

# fetch the reference model's weights
params = hapi_model.state_dict()
# weights with parameter names remapped to the custom model's naming scheme
new_params = {}
for key in params:
    if 'layer' in key:
        if 'downsample.0' in key:
            # downsample conv -> the 1x1 shortcut conv of the ResBlock
            new_params['net.' + key[5:8] + '.shortcut' + key[-7:]] = params[key]
        elif 'downsample.1' in key:
            # downsample batch norm -> the shortcut batch norm (bn3) of the ResBlock
            new_params['net.' + key[5:8] + '.bn3' + key[21:]] = params[key]
        else:
            new_params['net.' + key[5:]] = params[key]
    elif 'conv1.weight' == key:
        new_params['net.0.0.weight'] = params[key]
    elif 'bn1' in key:
        new_params['net.0.1' + key[3:]] = params[key]
    elif 'fc' in key:
        new_params['net.7' + key[2:]] = params[key]

# copy the remapped weights into the custom model so both models are identical
model.load_state_dict(new_params)

# put both models into eval mode so batch norm uses the copied running statistics
model.eval()
hapi_model.eval()

# use a random array created with np.random as test input
inputs = np.random.randn(*[3, 3, 32, 32]).astype('float32')
x = torch.tensor(inputs)

with torch.no_grad():
    output = model(x)
    hapi_out = hapi_model(x)

# difference between the two models' outputs
diff = output - hapi_out
# largest absolute difference; it should be (near) zero
max_diff = torch.max(torch.abs(diff))
print(max_diff.item())

Pytorch torchvision.models
resnet18 — Torchvision 0.13 documentation (pytorch.org)

Learn about and use torchvision.models.resnet18() in the experiment.
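
For reference, a minimal sketch of how torchvision's resnet18 could be adapted to single-channel MNIST input with 10 classes (the layer replacement below is an illustrative assumption, not part of the original experiment):

import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)   # fresh (untrained) resnet18 with a 10-way classifier
# replace the stem so the network accepts 1-channel (grayscale) images instead of 3-channel RGB
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

x = torch.randn(8, 1, 32, 32)      # a dummy batch of 32x32 grayscale images
print(model(x).shape)              # torch.Size([8, 10])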

Summary: this experiment introduced many new techniques and showed, through experiment, how ResNet's residual connections work. By comparing the network with and without the residual structure, I got a sense of how effective it is. However, this experiment only used an 18-layer network, so the contrast is not particularly striking; with more layers, say 34 or more, the effect should be much clearer, because the deeper the network, the more severe the vanishing gradients and the more visible the improvement brought by residual connections. I will try that when I have time.
