PyTorch Deep Learning Practice (Bilibili, 刘二大人), Lecture P11: Advanced CNNs

1. GoogLeNet

In the architecture diagram, blue blocks are convolutions, red blocks are pooling, and yellow blocks are softmax.
[Figure 1: GoogLeNet architecture]

Reducing code redundancy: functions/classes.
When a network structure is complex, similar or identical substructures can be encapsulated as a class (a reusable block).

GoogLeNet is often used as a backbone network. The part circled in red in the figure is called an Inception block.

2. The Inception Module

  • Kernel size: the idea behind this block is that we don't know in advance which kernel size works best, so the block runs several convolutions in parallel and stacks their results; if the 3×3 path is the most useful, its weights will naturally grow larger during training.
  • The block offers several candidate convolution configurations and lets training automatically find the optimal combination.
  • Concatenate joins the branch outputs along the channel dimension (all four branches must produce the same width and height); see the sketch after this list.
  • [Figure 2: Inception module structure]

  • Average pooling: choose padding and stride so that width and height stay unchanged.
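
As a minimal sketch of the concatenation step (the shapes below are illustrative, chosen to match the 88-channel example later in this post):

import torch

# Four branch outputs: same batch size, height, and width; channel counts may differ.
b1 = torch.randn(1, 16, 28, 28)
b2 = torch.randn(1, 24, 28, 28)
b3 = torch.randn(1, 24, 28, 28)
b4 = torch.randn(1, 24, 28, 28)

# dim=1 is the channel dimension in the (batch, channels, height, width) layout.
out = torch.cat([b1, b2, b3, b4], dim=1)
print(out.shape)  # torch.Size([1, 88, 28, 28])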

3. The 1×1 Convolution

1. Information fusion

The network above uses many 1×1 convolutions; their first purpose is information fusion.

[Figure: a 1×1 convolution combining three input channels]

In the figure above, after the 1×1 convolution, each element of the output fuses the information of the three input channels at the same spatial position (cross-channel fusion). By analogy, adding up the scores of several exam subjects into a total for comparison is also a form of information aggregation.
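
A minimal sketch of this idea (channel counts chosen only for illustration): a 1×1 convolution is just a per-pixel weighted sum across channels.

import torch
import torch.nn as nn

x = torch.randn(1, 3, 5, 5)                       # 3 input channels
conv = nn.Conv2d(3, 1, kernel_size=1, bias=False)
y = conv(x)                                       # 1 output channel, still 5x5

# Each output pixel is a weighted sum of the 3 input channels at that position.
w = conv.weight.view(3)
manual = (x * w.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)
print(torch.allclose(y, manual))                  # True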

2. Reducing computation

The other role of the 1×1 convolution is to reduce the amount of computation.

[Figure: computation of a 5×5 convolution with and without a 1×1 bottleneck]

In the structure shown, inserting a 1×1 convolution reduces the computation to roughly one tenth of the original. The main reason is that a 1×1 convolution can directly change the number of channels.

Turning a 192×28×28 tensor into 32×28×28 with a direct 5×5 convolution takes 5² × 28² × 192 × 32 multiplications: convolving one channel at one output position multiplies the 5×5 kernel against 25 pixels (5² operations); with padding this repeats at every output position (×28×28), over all 192 input channels, and 32 times for the output channels — about 120 million operations in total.
To cut the cost (operations ≈ kernel area × output pixels × input channels × output channels), first apply a 1×1 convolution down to 16×28×28, then the 5×5 convolution up to 32×28×28. The total is 1² × 28² × 192 × 16 + 5² × 28² × 16 × 32 ≈ 12 million operations.
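
A quick sketch of this arithmetic as code (a plain multiplication count, ignoring biases and the addition terms):

def conv_ops(k, h, w, c_in, c_out):
    # multiplications = kernel area x output pixels x input channels x output channels
    return k * k * h * w * c_in * c_out

direct = conv_ops(5, 28, 28, 192, 32)
bottleneck = conv_ops(1, 28, 28, 192, 16) + conv_ops(5, 28, 28, 16, 32)
print(direct)      # 120422400, about 120 million
print(bottleneck)  # 12443648, about 12 million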
The 1×1 convolution is sometimes called a "network in network".
 

4. Inception Module Code

[Figure 3: the Inception module, branch by branch]

#First branch: the pooling branch (branch_pool)
#1x1 convolution, 24 output channels
self.branch_pool=nn.Conv2d(in_channels,24,kernel_size=1)
#forward
branch_pool=F.avg_pool2d(x,kernel_size=3,stride=1,padding=1)#stride=1, padding=1 keep the size unchanged after averaging
branch_pool=self.branch_pool(branch_pool)
 
 
#Second branch: branch1x1, a 1x1 convolution with 16 output channels
self.branch1x1=nn.Conv2d(in_channels,16,kernel_size=1)
branch1x1=self.branch1x1(x)
 
#Third branch: branch5x5, two layers: a 1x1 conv (16 channels), then a 5x5 conv (24 channels, padding=2 keeps the image size)
self.branch5x5_1=nn.Conv2d(in_channels,16,kernel_size=1)
self.branch5x5_2=nn.Conv2d(16,24,kernel_size=5,padding=2)
 
branch5x5=self.branch5x5_1(x)
branch5x5=self.branch5x5_2(branch5x5)
 
#Fourth branch: branch3x3, three layers: one 1x1 conv, then two 3x3 convs
self.branch3x3_1=nn.Conv2d(in_channels,16,kernel_size=1)
self.branch3x3_2=nn.Conv2d(16,24,kernel_size=3,padding=1)
self.branch3x3_3=nn.Conv2d(24,24,kernel_size=3,padding=1)
 
branch3x3=self.branch3x3_1(x)
branch3x3=self.branch3x3_2(branch3x3)
branch3x3=self.branch3x3_3(branch3x3)
 
 
 
 

Concatenate the four branches along the channel dimension:

[Figures 4-5: concatenating the four branch outputs along the channel dimension]

#Put the four branches in a list, then use torch.cat to join them along dim=1,
#which is the channel dimension (tensor layout: batch_size, channels, width, height)
outputs=[branch1x1,branch5x5,branch3x3,branch_pool]
return torch.cat(outputs,dim=1)

Full Inception code:

#inception block 
class InceptionA(nn.Module):
    def __init__(self,in_channels):
        super(InceptionA,self).__init__()
        #in_channels is left as a parameter so the input channel count can be chosen at instantiation
        #branch 1: 1x1 convolution
        self.branch1x1=nn.Conv2d(in_channels,16,kernel_size=1)
        # branch 2: 1x1 then 5x5
        self.branch5x5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)
        # branch 3: 1x1 then two 3x3
        self.branch3x3_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)
        #branch 4: average pooling then 1x1
        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)
 
    def forward(self,x):
        #branch 1
        branch1x1 = self.branch1x1(x)
        #branch 2
        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)
        # branch 3
        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)
        # branch 4
        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)  # stride=1, padding=1 keep the size unchanged
        branch_pool = self.branch_pool(branch_pool)
        #concatenate along the channel dimension
        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)
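
A quick sanity check of the block (input size chosen to match the network below, and assuming the imports from the complete code further down): every branch preserves width and height, and the output channels add up to 16 + 24 + 24 + 24 = 88.

block = InceptionA(in_channels=10)
x = torch.randn(1, 10, 12, 12)   # (batch, channels, height, width)
print(block(x).shape)            # torch.Size([1, 88, 12, 12])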
  • Building a network with Inception blocks
#constructing the network
class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.conv1=nn.Conv2d(1,10,kernel_size=5)
        #the inception block outputs 88 channels; conv2 reduces 88 to 20
        self.conv2 = nn.Conv2d(88,20,kernel_size=5)
 
        self.incep1=InceptionA(in_channels=10)
        self.incep2=InceptionA(in_channels=20)
        #max pooling and the fully connected layer
        self.mp=nn.MaxPool2d(2)
        self.fc=nn.Linear(1408,10)#after incep2 each image flattens to 1408 elements (88 channels x 4 x 4)
 
    def forward(self,x):
        in_size=x.size(0)
        #conv -> pool -> relu; conv1 outputs 10 channels
        x=F.relu(self.mp(self.conv1(x)))
        #after the inception block: 16+24+24+24 = 88 channels
        x=self.incep1(x)
        #conv2 outputs 20 channels
        x=F.relu(self.mp(self.conv2(x)))
        #88 channels again
        x = self.incep2(x)
        #flatten into a vector
        x=x.view(in_size,-1)
        #fully connected layer
        x=self.fc(x)
        return x
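
Where does 1408 come from? Tracing the sizes for a 28×28 MNIST input: 28×28 → conv1 (k=5) → 24×24 → pool → 12×12 → incep1 (88 channels) → conv2 (k=5) → 8×8 → pool → 4×4 → incep2 → 88 × 4 × 4 = 1408. A dummy forward pass (assuming the imports from the complete code below) confirms the network runs end to end:

net = Net()
print(net(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])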

[Figure 6: the full network with two Inception blocks]

Complete code 1:

import torch.nn as nn
import torch
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

batch_size = 64
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])


train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, transform=transform, download=True)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, transform=transform, download=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)


class InceptionA(torch.nn.Module):
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch_pool = torch.nn.Conv2d(in_channels, 24, kernel_size=1)

        self.branch1x1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)

        self.branch5x5_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = torch.nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3x3_1 = torch.nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = torch.nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = torch.nn.Conv2d(24, 24, kernel_size=3, padding=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_2(self.branch5x5_1(x))

        branch3x3 = self.branch3x3_3(self.branch3x3_2(self.branch3x3_1(x)))

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch3x3, branch5x5, branch_pool]
        return torch.cat(outputs, dim=1)


class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(88, 20, kernel_size=5)

        self.incep1 = InceptionA(in_channels=10)
        self.incep2 = InceptionA(in_channels=20)

        self.mp = nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(1408, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))
        x = self.incep1(x)
        x = F.relu(self.mp(self.conv2(x)))
        x = self.incep2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x


net = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)


def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, targets = data
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        # forward
        y_pred = net(inputs)
        # backward
        loss = criterion(y_pred, targets)
        loss.backward()
        # update
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print("[%d,%d]loss:%.3f" % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


accuracy = []


def test():
    correct = 0
    total = 0

    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (labels == predicted).sum().item()

        print("accuracy on test set:%d %% [%d/%d]" % (100 * correct / total, correct, total))
        accuracy.append(100 * correct / total)


if __name__ == "__main__":
    for epoch in range(10):
        train(epoch)
        test()
    plt.plot(range(10), accuracy)
    plt.xlabel("epoch")
    plt.ylabel("accuracy")
    plt.grid()
    plt.show()
    print("done")

Results:

[Figures 7-8: training loss printout and test-set accuracy curve]

5. Residual Networks (ResNet)

Vanishing gradients: backpropagation uses the chain rule to multiply a long chain of local gradients together. If every factor is less than 1, the product keeps shrinking toward 0. Since the update rule is w = w − α·g, once the gradient g approaches 0 the weights stop updating, and the layers closest to the input never get adequately trained.

Solutions: layer-by-layer pre-training, or residual networks.
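
A one-line way to see why the skip connection helps (a standard argument, added here as a supplement to the lecture): the residual block computes H(x) = F(x) + x, so by the chain rule

∂H/∂x = ∂F/∂x + 1

The gradient flowing backward through each block stays near 1 even when ∂F/∂x is small, instead of being driven toward 0 by a long product of factors less than 1.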

[Figure 9: plain network vs. residual network with skip connections]

Residual block implementation:

[Figure 10: a residual block, two 3×3 convolutions plus a skip connection]

In code:

class ResidualBlock(nn.Module):
    #a residual block must keep the output channel count equal to the input's so that x and F(x) can be added
    def __init__(self,channels):
        super(ResidualBlock,self).__init__()
        self.channels=channels
        #3x3 kernels; padding=1 keeps the image size unchanged
        #first convolution
        self.conv1=nn.Conv2d(channels,channels,
                             kernel_size=3,padding=1)
        #second convolution
        self.conv2=nn.Conv2d(channels,channels,
                             kernel_size=3,padding=1)
 
    def forward(self,x):
        #activation
        y=F.relu(self.conv1(x))
        y=self.conv2(y)
        #add first, then activate
        return F.relu(x+y)
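
A quick check that the block preserves shape (the sizes are illustrative, assuming the imports from the complete code below):

blk = ResidualBlock(16)
print(blk(torch.randn(1, 16, 12, 12)).shape)  # torch.Size([1, 16, 12, 12])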

[Figure 11: the residual network used below]

Complete ResNet code 2:

import torch.nn as nn
import torch
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

batch_size = 64
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])


train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, transform=transform, download=True)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, transform=transform, download=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)





class ResidualBlock(nn.Module):
    # a residual block must keep the output channel count equal to the input's so that x and F(x) can be added
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        # 3x3 kernels; padding=1 keeps the image size unchanged
        # first convolution
        self.conv1 = nn.Conv2d(channels, channels,
                               kernel_size=3, padding=1)
        # second convolution
        self.conv2 = nn.Conv2d(channels, channels,
                               kernel_size=3, padding=1)

    def forward(self, x):
        # activation
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        # add first, then activate
        return F.relu(x + y)

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(16, 32, kernel_size=5)
        self.mp = nn.MaxPool2d(2)

        self.rblock1 = ResidualBlock(16)
        self.rblock2 = ResidualBlock(32)


        self.fc = torch.nn.Linear(512, 10)  # 32 channels x 4 x 4 = 512 after two conv+pool stages

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))
        x = self.rblock1(x)
        x = F.relu(self.mp(self.conv2(x)))
        x = self.rblock2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x


net = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)


def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, targets = data
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        # forward
        y_pred = net(inputs)
        # backward
        loss = criterion(y_pred, targets)
        loss.backward()
        # update
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print("[%d,%d]loss:%.3f" % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


accuracy = []


def test():
    correct = 0
    total = 0

    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (labels == predicted).sum().item()

        print("accuracy on test set:%d %% [%d/%d]" % (100 * correct / total, correct, total))
        accuracy.append(100 * correct / total)


if __name__ == "__main__":
    for epoch in range(10):
        train(epoch)
        test()
    plt.plot(range(10), accuracy)
    plt.xlabel("epoch")
    plt.ylabel("accuracy")
    plt.grid()
    plt.show()
    print("done")
