Creating a Model with nn.Module

Contents:

  1. Building the network model
  2. nn.Module attributes

1. Building the Network Model

Using nn.Module

LeNet network architecture:

(figures: LeNet architecture diagrams)

nn.Module summary

  1. A module can contain multiple sub-modules.
  2. A module represents a computation and must implement the forward() function.
  3. Every module uses eight dictionaries to manage its attributes (see the sketch after this list).
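
A minimal sketch of point 3: the snippet below inspects a few of those dictionaries directly (attribute names as in PyTorch 1.x; newer releases add further hook dictionaries):

import torch.nn as nn

conv = nn.Conv2d(3, 6, 5)
# _parameters holds the learnable tensors of the module itself
print(list(conv._parameters.keys()))   # ['weight', 'bias']
# _modules holds registered sub-modules (empty for a plain Conv2d)
print(list(conv._modules.keys()))      # []
# _buffers holds non-learnable state such as BatchNorm running statistics
print(list(conv._buffers.keys()))      # []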

PyTorch implementation:

import torch.nn as nn
import torch.nn.functional as F

# Build the LeNet network
class LeNet(nn.Module):
    # Initialization
    def __init__(self, classes):
        # Call the parent nn.Module constructor
        super(LeNet, self).__init__()
        # Create the sub-modules
        # Inheritance chain: Conv2d -> _ConvNd -> Module
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # The output of each layer is the input of the next layer
        # nn.Linear applies a linear transformation to the incoming data: y = xA^T + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, classes)

    # Forward pass: connect the layers created above
    def forward(self, x):
        # Again, each layer's output is the next layer's input:
        # out1 --> out2 --> out3 --> ... --> out8
        out1 = F.relu(self.conv1(x))
        out2 = F.max_pool2d(out1, 2)
        out3 = F.relu(self.conv2(out2))
        out4 = F.max_pool2d(out3, 2)
        out5 = out4.view(out4.size(0), -1)
        out6 = F.relu(self.fc1(out5))
        out7 = F.relu(self.fc2(out6))
        out8 = self.fc3(out7)
        return out8

    # Weight initialization
    def initialize_weights(self):
        # Iterate over every sub-module
        # (isinstance checks whether an object is of a given type, similar to type())
        for m in self.modules():
            # Convolution layers: Xavier-normal weights, zero bias
            if isinstance(m, nn.Conv2d):
                nn.init.xavier_normal_(m.weight.data)
                if m.bias is not None:
                    m.bias.data.zero_()
            # Batch normalization layers: fill weights with 1, zero the bias
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            # Fully connected layers: weights drawn from a normal distribution
            # with mean=0, std=0.1; bias initialized to 0
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight.data, 0, 0.1)
                m.bias.data.zero_()
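
As a quick sanity check of the model above (a minimal sketch; the 32x32 input size is an assumption, chosen because it is the standard LeNet input that makes the 16*5*5 flatten size work out):

import torch

net = LeNet(classes=10)
net.initialize_weights()
x = torch.randn(4, 3, 32, 32)   # a batch of four 3-channel 32x32 images
out = net(x)
print(out.shape)                # torch.Size([4, 10])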

Besides defining the model by hand as above, there is an even simpler way to obtain AlexNet, using torchvision:

import torchvision

# Build AlexNet
alexnet = torchvision.models.AlexNet()
print(alexnet)
AlexNet(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
    (1): ReLU(inplace=True)
    (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (4): ReLU(inplace=True)
    (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (7): ReLU(inplace=True)
    (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): ReLU(inplace=True)
    (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
  (classifier): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=9216, out_features=4096, bias=True)
    (2): ReLU(inplace=True)
    (3): Dropout(p=0.5, inplace=False)
    (4): Linear(in_features=4096, out_features=4096, bias=True)
    (5): ReLU(inplace=True)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
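
The printed structure shows that AlexNet is itself an nn.Module built from sub-modules (two Sequential containers and an adaptive pooling layer). They can be traversed just as initialize_weights() did above; a minimal sketch:

# Iterate over AlexNet's immediate sub-modules
for name, module in alexnet.named_children():
    print(name, type(module).__name__)
# features Sequential
# avgpool AdaptiveAvgPool2d
# classifier Sequential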
