[22] Deep Learning with PyTorch - Regularization: Dropout

0. Previous Posts in This Series

[1] Deep Learning with PyTorch - Tensor Definition and Tensor Creation

[2] Deep Learning with PyTorch - Tensor Operations: Concatenation, Splitting, Indexing, and Transformation

[3] Deep Learning with PyTorch - Tensor Math Operations

[4] Deep Learning with PyTorch - Linear Regression

[5] Deep Learning with PyTorch - Computational Graphs and the Dynamic Graph Mechanism

[6] Deep Learning with PyTorch - autograd and Logistic Regression

[7] Deep Learning with PyTorch - DataLoader and Dataset (with an RMB Banknote Binary Classification Example)

[8] Deep Learning with PyTorch - Image Preprocessing with transforms

[9] Deep Learning with PyTorch - Image Augmentation with transforms (Cropping, Flipping, Rotation)

[10] Deep Learning with PyTorch - transforms Image Operations and Custom Methods

[11] Deep Learning with PyTorch - Model Creation and nn.Module

[12] Deep Learning with PyTorch - Model Containers and Building AlexNet

[13] Deep Learning with PyTorch - Convolutional Layers (1D/2D/3D Convolution, nn.Conv2d, Transposed Convolution nn.ConvTranspose)

[14] Deep Learning with PyTorch - Pooling Layers, Linear Layers, and Activation Function Layers

[15] Deep Learning with PyTorch - Weight Initialization

[16] Deep Learning with PyTorch - 18 Loss Functions

[17] Deep Learning with PyTorch - Optimizers

[18] Deep Learning with PyTorch - Learning Rate Scheduling Strategies

[19] Deep Learning with PyTorch - Visualization with TensorBoard

[20] Deep Learning with PyTorch - Hook Functions and the CAM Algorithm

[21] Deep Learning with PyTorch - Regularization: Weight Decay

[22] Deep Learning with PyTorch - Regularization: Dropout


  • 0. Previous Posts in This Series
  • 1. Definition of Dropout
  • 2. nn.Dropout(p=0.5)

1. Definition of Dropout

Dropout ("random deactivation") zeroes each neuron's output with probability p during training. Because a different subset of neurons is dropped on every forward pass, neurons cannot co-adapt to each other, which regularizes the network and reduces overfitting.

(Figures 1-2: dropout illustrations)
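As a minimal sketch of the idea (my own illustration, not code from the original post), "inverted dropout" can be written by hand: zero each element with probability p during training and scale the survivors by 1/(1-p), so the expected activation stays the same; at test time the input passes through untouched.

```python
import torch

def inverted_dropout(x, p=0.5, training=True):
    # Training: zero each element with probability p, then scale the
    # survivors by 1/(1-p) so the expected value is unchanged.
    # Testing: identity, no scaling needed.
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).float()  # 1 = keep, 0 = drop
    return x * mask / (1.0 - p)

torch.manual_seed(0)
x = torch.ones(8)
print(inverted_dropout(x, p=0.5, training=True))   # each element is 0.0 or 2.0
print(inverted_dropout(x, p=0.5, training=False))  # unchanged: all 1.0
```

This is exactly the convention nn.Dropout follows, which is why no rescaling is required at evaluation time.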

2. nn.Dropout(p=0.5)

nn.Dropout(p=0.5, inplace=False)

(Figure 3: nn.Dropout parameter description)

During training, the kept activations are scaled by 1/(1 - p) (inverted dropout), so at test time the outputs do not need to be multiplied by (1 - drop_prob).

Code example:

# -*- coding:utf-8 -*-
"""
@file name  : dropout_layer.py
@brief      : experiment with nn.Dropout usage
"""
import torch
import torch.nn as nn

# torch.manual_seed(1)  # set a random seed for reproducible dropout masks


class Net(nn.Module):
    def __init__(self, neural_num, d_prob=0.5):
        super(Net, self).__init__()

        self.linears = nn.Sequential(
            nn.Dropout(d_prob),
            nn.Linear(neural_num, 1, bias=False),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.linears(x)


input_num = 10000
x = torch.ones((input_num, ), dtype=torch.float32)

net = Net(input_num, d_prob=0.5)
net.linears[1].weight.detach().fill_(1.)  # set all weights to 1

net.train()   # training mode: dropout active
y = net(x)
print("output in training mode", y)

net.eval()    # eval mode: dropout disabled
y = net(x)
print("output in eval mode", y)

In training mode, the surviving activations are scaled by 1/(1 - 0.5) = 2. About 5,000 of the 10,000 input neurons survive (each is dropped with probability 0.5), so the output is roughly 2 × 5000 = 10000.
In eval mode, no dropout or scaling is applied: all 10,000 neurons are active with weight 1, so the output is exactly 1 × 10000 = 10000.
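The same scaling can also be checked element-wise on a small tensor (a minimal sketch, not part of the original script): in train mode each surviving element of nn.Dropout(p=0.5) comes out as 1/(1 - 0.5) = 2, and in eval mode the layer is an identity.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(10)

drop.train()    # training mode: dropout active
y_train = drop(x)
print(y_train)  # each element is 0.0 (dropped) or 2.0 (kept, scaled by 1/(1-p))

drop.eval()     # eval mode: dropout is an identity mapping
y_eval = drop(x)
print(y_eval)   # identical to x: all 1.0
```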


