The difference between torch.nn.Dropout and torch.nn.Dropout2d

import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
n = nn.Dropout2d(p=0.5)
input = torch.randn(1, 2, 6, 3)  # (batch, channels, H, W); Dropout2d randomly zeroes whole channels along dim=1

print(m(input))
print('****************************************************')
print(n(input))

[Figure 1: output of nn.Dropout vs nn.Dropout2d on the 4-D input]
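The channel-wise behavior shown in the figure can be checked programmatically. The sketch below (my own illustration, not from the original post; the seed and channel count are arbitrary) verifies that every (H, W) slice produced by Dropout2d is either entirely zeroed or entirely kept, with kept values scaled by 1/(1-p):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n = nn.Dropout2d(p=0.5)  # modules are in training mode by default
x = torch.randn(1, 8, 6, 3)  # (batch, channels, H, W)

out2d = n(x)
# Dropout2d zeroes entire channels: each (H, W) slice is either all zero
# or the original values scaled by 1 / (1 - p) = 2
for c in range(x.size(1)):
    ch = out2d[0, c]
    assert (ch == 0).all() or torch.allclose(ch, x[0, c] * 2)
print("every channel is either fully zeroed or fully kept (scaled by 2)")
```

By contrast, the same check would fail for `nn.Dropout`, which decides element by element rather than channel by channel.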

Everything below is an incorrect explanation and an incorrect example. I have not deleted it because it is useful for comparison; I hope readers will avoid making the same mistake.

# -*- coding: utf-8 -*-
import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
n = nn.Dropout2d(p=0.5)
input = torch.randn(2, 6, 3)  # 3-D input with no explicit batch dimension -- this is what makes the example wrong

print(m(input))
print('****************************************************')
print(n(input))

The result is:
[Figure 2: output of the two dropout layers on the 3-D input, with green ellipses and red boxes marking the zeroed regions]
As the figure shows, torch.nn.Dropout zeroes each individual element with probability 0.5 (green ellipses), while torch.nn.Dropout2d zeroes entire channels with probability 0.5 (red boxes).
Note: only some of the affected regions are marked.
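The element-wise behavior of nn.Dropout can also be confirmed statistically. The sketch below (my own addition; the tensor size and seed are arbitrary choices) checks that roughly half the elements are zeroed and that every surviving element is scaled by 1/(1-p):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

m = nn.Dropout(p=0.5)  # training mode by default
x = torch.randn(100000)
y = m(x)

# about half of the elements should be zeroed
zero_frac = (y == 0).float().mean().item()
assert abs(zero_frac - 0.5) < 0.02

# surviving elements are scaled by 1 / (1 - p) = 2 so the expected
# activation magnitude is unchanged between train and eval
mask = y != 0
assert torch.allclose(y[mask], x[mask] * 2)
print(f"fraction zeroed: {zero_frac:.3f}")
```

The 1/(1-p) rescaling is why no extra scaling is needed at inference time: calling `m.eval()` simply makes dropout the identity.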
