PyTorch Deep Learning Practice Lecture 11: Advanced CNN


Author: Horizon Max



Video link: Lecture 11 Advanced_CNN
Courseware link: https://pan.baidu.com/s/1vZ27gKp8Pl-qICn_p2PaSw (extraction code: cxe4)

Table of Contents

  • Advanced_CNN
    • Overview
      • GoogLeNet
      • Inception Module
      • Code
      • Results
    • Appendix: Related Documentation

Advanced_CNN

Overview

GoogLeNet

GoogLeNet is a deep neural network model released by Google and built around the Inception Module; it won the 2014 ImageNet competition:
[Figure 1: GoogLeNet architecture]

Inception Module

The basic structure has four branches: a 1×1 convolution, a 3×3 convolution, a 5×5 convolution, and a 3×3 max pooling. The outputs of the four branches are then concatenated along the channel dimension. (Note that the implementation below uses 3×3 average pooling in its pooling branch.)

This is the module's core idea: several kernels extract information from the image at different scales, and fusing the results yields a richer representation of the image.
[Figure 2: Inception module structure]
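As a minimal sketch of that fusion step (using the branch widths from the code below), the four branch outputs are concatenated with torch.cat along dim=1. This only works because every branch preserves the spatial size, which is why the convolutions below use matching padding:

import torch

b1 = torch.randn(1, 16, 12, 12)   # 1x1 branch output
b2 = torch.randn(1, 24, 12, 12)   # 5x5 branch output
b3 = torch.randn(1, 24, 12, 12)   # double-3x3 branch output
b4 = torch.randn(1, 24, 12, 12)   # pooling branch output
out = torch.cat([b1, b2, b3, b4], dim=1)
print(out.shape)                  # torch.Size([1, 88, 12, 12])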

A 1×1 convolution kernel appears here; the figure below should make its purpose clear:

[Figure 3: computational cost with and without a 1×1 convolution]
The top path applies a 5×5 convolution directly; the bottom path inserts a 1×1 convolution before the 5×5 convolution. Their computational costs differ by roughly a factor of 10.
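As a rough check of that factor (a sketch; the channel counts here are the ones usually quoted on this lecture's slide, not values from the code below): counting multiply-accumulate operations as kernel² × H × W × C_in × C_out for a stride-1, same-padding convolution on a 192-channel 28×28 feature map, with 32 output channels and a 16-channel 1×1 bottleneck:

H = W = 28                              # assumed spatial size from the slide
direct = 5 * 5 * H * W * 192 * 32       # 5x5 conv directly: 120,422,400 MACs
reduce = 1 * 1 * H * W * 192 * 16       # 1x1 bottleneck:      2,408,448 MACs
conv5 = 5 * 5 * H * W * 16 * 32         # then the 5x5 conv:  10,035,200 MACs
print(direct / (reduce + conv5))        # ~9.7, i.e. roughly a 10x saving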

As the amount of data increases and models get deeper, the so-called curse of dimensionality kicks in: the computational cost grows dramatically.

Inserting 1×1 convolutions in this way keeps both the parameter count and the computation under control.
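The same trick also shrinks the parameter count, not just the arithmetic. A sketch with the same assumed channel numbers as above:

import torch.nn as nn

direct = nn.Conv2d(192, 32, kernel_size=5, padding=2)
reduce = nn.Sequential(nn.Conv2d(192, 16, kernel_size=1),
                       nn.Conv2d(16, 32, kernel_size=5, padding=2))
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(direct), count(reduce))   # 153,632 vs. 15,920 weights+biases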

Inception Module:
[Figure 4: Inception module implementation diagram]



Code

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
from torchvision import datasets, transforms
 

# 1 prepare dataset
batch_size = 64
train_dataset = datasets.MNIST(root='./data',
                               train=True,
                               transform=transforms.ToTensor(),
                               download=True)

test_dataset = datasets.MNIST(root='./data',
                              train=False,
                              transform=transforms.ToTensor(),
                              download=True)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)


# 2 design model using class

class InceptionA(nn.Module):      # build the Inception Module
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        # branch 1: a single 1x1 convolution
        self.branch1x1 = nn.Conv2d(in_channels, 16, kernel_size=1)

        # branch 2: 1x1 reduction followed by a 5x5 convolution
        self.branch5x5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)

        # branch 3: 1x1 reduction followed by two stacked 3x3 convolutions
        self.branch3x3dbl_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3dbl_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3dbl_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)

        # branch 4: pooling followed by a 1x1 convolution
        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)

        # note: this implementation uses average pooling in the pooling branch
        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        # concatenate the four branches along the channel dimension:
        # 16 + 24 + 24 + 24 = 88 output channels
        outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
        return torch.cat(outputs, 1)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(88, 20, kernel_size=5)   # 88 = InceptionA output channels

        self.incept1 = InceptionA(in_channels=10)
        self.incept2 = InceptionA(in_channels=20)

        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(1408, 10)                   # 1408 = 88 channels * 4 * 4

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))   # 1x28x28 -> 10x24x24 -> 10x12x12
        x = self.incept1(x)                  # -> 88x12x12
        x = F.relu(self.mp(self.conv2(x)))   # -> 20x8x8 -> 20x4x4
        x = self.incept2(x)                  # -> 88x4x4

        x = x.view(in_size, -1)              # flatten the tensor to (batch, 1408)
        x = self.fc(x)
        return F.log_softmax(x, dim=1)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net().to(device)


# 3 construct loss and optimizer

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
 

# 4 training cycle

def train(epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)    # negative log-likelihood on log-softmax outputs
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:
            print('Train Epoch:{}[{}/{} ({:.0f}%)]\t Loss:{:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                       100. * batch_idx / len(train_loader), loss.item()))

def test():
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():                    # no gradients needed for evaluation
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # sum up batch loss
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            # get the index of the max log-probability
            pred = output.argmax(dim=1)
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    accuracy = 100. * correct / len(test_loader.dataset)
    print('\n Test set: Average loss:{:.4f},Accuracy:{}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset), accuracy))
    return accuracy                          # returned so the main loop can plot it
 
if __name__ == '__main__':
    epoch_list = []
    acc_list = []

    for epoch in range(10):
        train(epoch)
        acc = test()
        epoch_list.append(epoch)
        acc_list.append(acc)

    plt.plot(epoch_list, acc_list)
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.show()
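As a quick sanity check of the flatten size (a sketch to run right after building the model; ReLUs are omitted since they do not change shapes): a 1×28×28 MNIST input becomes 10×24×24 after conv1, 10×12×12 after pooling, 88×12×12 after the first Inception block, 20×8×8 after conv2, 20×4×4 after pooling, and 88×4×4 after the second Inception block, so the flattened vector has 88 × 4 × 4 = 1408 features:

with torch.no_grad():
    dummy = torch.randn(1, 1, 28, 28).to(device)    # one fake MNIST image
    x = model.mp(model.conv1(dummy))                # shape: (1, 10, 12, 12)
    x = model.incept1(x)                            # shape: (1, 88, 12, 12)
    x = model.mp(model.conv2(x))                    # shape: (1, 20, 4, 4)
    x = model.incept2(x)                            # shape: (1, 88, 4, 4)
    print(x.view(1, -1).shape)                      # torch.Size([1, 1408])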

Results

[Figure 5: run results]



Appendix: Related Documentation

PyTorch official documentation: PyTorch Documentation
PyTorch handbook (Chinese): PyTorch Handbook


Links to the PyTorch Deep Learning Practice series:

  Lecture01 Overview
  Lecture02 Linear_Model
  Lecture03 Gradient_Descent
  Lecture04 Back_Propagation
  Lecture05 Linear_Regression_with_PyTorch
  Lecture06 Logistic_Regression
  Lecture07 Multiple_Dimension_Input
  Lecture08 Dataset_and_Dataloader
  Lecture09 Softmax_Classifier
  Lecture10 Basic_CNN
  Lecture11 Advanced_CNN
  Lecture12 Basic_RNN
  Lecture13 RNN_Classifier
