After convolving an image, should the activation or the pooling come first? Both orders are in common use.
The conclusion up front: with ReLU, the order makes no difference for max pooling, but it does matter for average pooling.
The code below demonstrates this:
import torch
import torch.nn as nn


def maxpooling_order():
    """
    Max pooling order
    :return:
    """
    img = torch.randn(1, 1, 7, 7)
    conv = nn.Conv2d(1, 1, 3, 1)
    pool = nn.MaxPool2d(2)
    relu = nn.ReLU()
    conv_img = conv(img)
    print("Convolution output:\n", conv_img)
    # Activate first, then pool
    x1 = relu(conv_img)
    x1 = pool(x1)
    print("ReLU then max pooling:\n", x1)
    # Pool first, then activate
    x2 = pool(conv_img)
    x2 = relu(x2)
    print("Max pooling then ReLU:\n", x2)
"""
卷完后的结果:
tensor([[[[ 0.0489, -0.5657, -0.1745, 0.1998, 0.1667],
[ 0.0288, -0.4943, -0.5088, 0.6079, -0.2137],
[-0.6882, 0.2084, -0.9552, 0.4098, -0.0591],
[ 0.4386, -0.4203, 0.8329, -0.9906, 0.5458],
[ 1.2271, 0.1976, 0.0591, 1.4228, -2.0060]]]],
grad_fn=)
先激活再最大池化:
tensor([[[[0.0489, 0.6079],
[0.4386, 0.8329]]]], grad_fn=)
先最大池化再激活:
tensor([[[[0.0489, 0.6079],
[0.4386, 0.8329]]]], grad_fn=)
"""
Conclusion: both orders give the same result. This is no accident: ReLU and the window max are both monotonically non-decreasing, so taking the maximum of activated values equals activating the maximum.
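Because the equivalence holds for every input, not just the sample above, it can be checked directly with torch.equal. A minimal sketch (input shape chosen arbitrarily):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 1, 6, 6)   # arbitrary feature map
pool = nn.MaxPool2d(2)
relu = nn.ReLU()

# maxpool(relu(x)) == relu(maxpool(x)): both ops are monotone,
# and neither does any arithmetic, so the results match exactly.
same = torch.equal(pool(relu(x)), relu(pool(x)))
print(same)
```

Since pooling first shrinks the feature map before the element-wise ReLU runs, the pool-then-activate order is also slightly cheaper at identical output.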
def avgpooling_order():
    """
    Average pooling order
    :return:
    """
    img = torch.randn(1, 1, 7, 7)
    conv = nn.Conv2d(1, 1, 3, 1)
    pool = nn.AvgPool2d(2)
    relu = nn.ReLU()
    conv_img = conv(img)
    print("Convolution output:\n", conv_img)
    # Activate first, then pool
    x1 = relu(conv_img)
    x1 = pool(x1)
    print("ReLU then average pooling:\n", x1)
    # Pool first, then activate
    x2 = pool(conv_img)
    x2 = relu(x2)
    print("Average pooling then ReLU:\n", x2)
"""
卷完后的结果:
tensor([[[[-0.5341, -0.2345, 0.8252, 0.1597, 0.7983],
[-1.5555, -0.1043, -0.1961, 0.1143, 0.7206],
[-0.6428, 0.2705, -2.1325, 0.1707, 0.9366],
[ 1.2081, 0.3856, -2.1711, 0.0173, -0.2482],
[ 0.8903, -0.6599, 0.9174, -0.4508, -1.2815]]]],
grad_fn=)
先激活再平均池化:
tensor([[[[0.0000, 0.2748],
[0.4661, 0.0470]]]], grad_fn=)
先平均池化再激活:
tensor([[[[0.0000, 0.2258],
[0.3054, 0.0000]]]], grad_fn=)
"""
Conclusion:
Activating first preserves more effective features (non-zero values).
Pooling first lets negative values drag the window average down, so more outputs are zeroed by the subsequent ReLU, which is bad for learning.
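The effect is easiest to see on a hand-built window. In this sketch (values chosen purely for illustration), the single 2x2 window averages to a negative number, so pooling first zeroes the feature entirely, while activating first keeps a positive response:

```python
import torch
import torch.nn as nn

# One 2x2 window containing positive values,
# but whose average (-0.5) is negative.
x = torch.tensor([[[[-3.0,  1.0],
                    [ 1.0, -1.0]]]])
pool = nn.AvgPool2d(2)
relu = nn.ReLU()

act_then_pool = pool(relu(x))  # mean of [0, 1, 1, 0] = 0.5
pool_then_act = relu(pool(x))  # relu(-0.5) = 0.0
print(act_then_pool.item(), pool_then_act.item())  # 0.5 0.0
```

Unlike max, averaging is not order-preserving under ReLU, so the two orders genuinely compute different functions here.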