Convolutional Neural Network Notes

  • 1. torch.nn.BatchNorm2d() explained
    • ① How it works
    • ② Code implementation
    • ③ Parameters in detail

1. torch.nn.BatchNorm2d() explained

① How it works

Official documentation: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html

torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
Applies Batch Normalization over a 4D input.
 - Input: (N, C, H, W)    N: batch size, C: number of channels, H & W: height and width of the 2D feature map
 - Output: (N, C, H, W)   same shape as the input

y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta

That is, the original input x is normalized using the mean and standard deviation computed over the mini-batch. These statistics are computed separately for each channel C, over the N samples in the batch (and over the H×W spatial positions).
Illustrated below:

(Figure: per-channel batch normalization; image from https://blog.csdn.net/qq_39777550/article/details/108038677)
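For a concrete picture of this, the following minimal sketch (my own illustration; the tensor shape and seed are arbitrary) reproduces what BatchNorm2d does in training mode by computing the mean and biased variance over the (N, H, W) dimensions for each channel:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(4, 3, 5, 5)        # (N=4, C=3, H=5, W=5)

bn = nn.BatchNorm2d(3)             # one mean/variance pair per channel
bn.train()
y_bn = bn(x)

# Manual computation: statistics per channel C, taken over N, H and W.
mean = x.mean(dim=(0, 2, 3), keepdim=True)                  # shape (1, 3, 1, 1)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)    # biased variance is used for normalization
y_manual = (x - mean) / torch.sqrt(var + bn.eps)

# With affine=True, gamma (weight) starts at 1 and beta (bias) at 0,
# so the freshly constructed module should give the same result.
print(torch.allclose(y_bn, y_manual, atol=1e-6))            # True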

② Code implementation

Code:
import torch
import torch.nn as nn

# BatchNorm2d over C = 2 channels (num_features = 2)
m = nn.BatchNorm2d(2)

# Input of shape (N=2, C=2, H=2, W=2)
input = torch.tensor([
    [
        [
            [1, 1],
            [1, 2]
        ],
        [
            [-1, 1],
            [0, 1]
        ]
    ],
    [
        [
            [0, -1],
            [2, 2]
        ],
        [
            [0, -1],
            [3, 1]
        ]
    ]
]).to(torch.float32)
output = m(input)
print(output)
Output:
tensor([[[[ 0.0000,  0.0000],
          [ 0.0000,  1.0000]],

         [[-1.2247,  0.4082],
          [-0.4082,  0.4082]]],


        [[[-1.0000, -2.0000],
          [ 1.0000,  1.0000]],

         [[-0.4082, -1.2247],
          [ 2.0412,  0.4082]]]], grad_fn=<NativeBatchNormBackward0>)
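As a quick sanity check on the first channel: its eight values across the batch are 1, 1, 1, 2, 0, -1, 2, 2, so the batch mean is 8/8 = 1 and the (biased) variance is (0+0+0+1+1+4+1+1)/8 = 1. Normalizing gives (1-1)/1 = 0, 0, 0 and (2-1)/1 = 1 for the first sample, and (0-1)/1 = -1, (-1-1)/1 = -2, (2-1)/1 = 1, (2-1)/1 = 1 for the second, which matches the first channel of the output above.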

③ Parameters in detail

  • num_features – C from an expected input of size (N, C, H, W)
  • eps – a value added to the denominator for numerical stability. Default: 1e-5. This prevents the denominator from being zero.
  • momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1 (see the sketch after this list for how the running statistics are updated).
  • affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True. This decides whether BatchNorm2d() has learnable per-channel parameters (gamma and beta) that get updated during training.
  • track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes the statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True. This decides whether running estimates of the whole training set's mean and variance are tracked. If True, then at test time the statistics used are effectively those of the whole training set (the common choice, and more stable); otherwise, at test time only the mean and variance of each test batch are used, which makes the results fluctuate from batch to batch and is generally not recommended.
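The sketch referenced above (my own illustration; shapes and numbers are arbitrary) shows what affine and momentum control in practice: the learnable per-channel gamma (weight) and beta (bias), and the exponential-moving-average update of running_mean and running_var:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3, momentum=0.1, affine=True)   # these are the defaults, written out for clarity

# affine=True: one learnable gamma (weight) and beta (bias) per channel.
print(bn.weight)        # shape (3,), initialized to ones
print(bn.bias)          # shape (3,), initialized to zeros

# track_running_stats=True: non-learnable buffers updated during training.
print(bn.running_mean)  # starts at zeros
print(bn.running_var)   # starts at ones

x = torch.randn(8, 3, 4, 4)
bn.train()
bn(x)

# After one forward pass in training mode the buffers follow an exponential moving average:
#   running_mean <- (1 - momentum) * running_mean + momentum * batch_mean
#   running_var  <- (1 - momentum) * running_var  + momentum * batch_var (unbiased)
batch_mean = x.mean(dim=(0, 2, 3))
batch_var = x.var(dim=(0, 2, 3), unbiased=True)
print(torch.allclose(bn.running_mean, 0.1 * batch_mean, atol=1e-6))                      # True
print(torch.allclose(bn.running_var, 0.9 * torch.ones(3) + 0.1 * batch_var, atol=1e-6))  # True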
Understanding the track_running_stats parameter
1. training=True, track_running_stats=True: the usual behavior expected during training. running_mean and running_var track the statistics of the different batches, but normalization still uses each batch's own mean and variance.
2. training=True, track_running_stats=False: running_mean and running_var no longer track cross-batch statistics, but normalization still uses each batch's own mean and variance.
3. training=False, track_running_stats=True: the behavior expected at test time, i.e. the running_mean and running_var estimated during training are used.
4. training=False, track_running_stats=False: same as case 2.
Author: 李韶华
Link: https://www.zhihu.com/question/282672547/answer/529154567
Source: Zhihu
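To illustrate cases 1 and 3 above, here is a small sketch (my own example; the shapes and the +5.0 shift are made up) showing how the same module normalizes a test batch differently in train() and eval() modes:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(1)                   # track_running_stats=True by default

x_train = torch.randn(16, 1, 8, 8)
bn.train()                               # case 1: normalize with batch stats, update running stats
bn(x_train)

x_test = torch.randn(4, 1, 8, 8) + 5.0   # test batch whose mean differs a lot from the training data

bn.eval()                                # case 3: normalize with the stored running_mean / running_var
y_eval = bn(x_test)
print(y_eval.mean().item())              # far from 0: the running mean (close to 0) is used, not the test batch mean

bn.train()                               # back to case 1: the batch's own statistics are used again
y_batch = bn(x_test)
print(y_batch.mean().item())             # close to 0: the test batch's own mean is subtracted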
