Testing PyTorch methods: 2D convolution (nn.Conv2d)

Test code:

import torch
import torch.nn as nn

# Conv2d(in_channels=2, out_channels=2, kernel_size=3, stride=2); padding defaults to 0.
m = nn.Conv2d(2, 2, 3, stride=2)
# One sample with 2 channels and a 5x7 spatial size.
input = torch.randn(1, 2, 5, 7)
output = m(input)

print("Input (2 channels):")
print(input)
print("Convolution weights:")
print(m.weight)
print("Convolution bias:")
print(m.bias)

print("Output of the 2D convolution:")
print(output)
print("Output size:")
print(output.size())

# Recompute the top-left output element of each output channel by hand:
# multiply the top-left 3x3 patch of every input channel with the matching
# kernel slice, sum all the products, then add the bias.
convBlockOne = 0
convBlockTwo = 0
for i in range(3):
    for j in range(3):
        # First kernel (output channel 0) against both input channels
        convBlockOne += m.weight[0][0][i][j] * input[0][0][i][j] \
                        + m.weight[0][1][i][j] * input[0][1][i][j]
        # Second kernel (output channel 1) against both input channels
        convBlockTwo += m.weight[1][0][i][j] * input[0][0][i][j] \
                        + m.weight[1][1][i][j] * input[0][1][i][j]
convBlockOne += m.bias[0]
convBlockTwo += m.bias[1]
print("Output of the first kernel (top-left element):")
print(convBlockOne)
print("Output of the second kernel (top-left element):")
print(convBlockTwo)
The output is:
Input (2 channels):
tensor([[[[ 2.4427, -0.2766,  1.0519, -1.6580, -0.8700, -0.8712, -0.7140],
          [-1.1698, -2.2573,  0.3525, -0.4197,  1.2041, -0.2023,  1.9264],
          [-0.0254,  0.9521,  1.0125,  0.0290,  0.1366,  0.0254, -0.1338],
          [-0.5112, -0.7758, -1.6293, -0.6308,  1.3666, -0.4817, -1.2356],
          [ 0.4078,  0.9890, -1.4422, -0.1429, -0.1279,  0.0739, -1.3344]],


         [[ 0.1312,  0.8048, -0.1161,  0.2302, -0.9466,  0.2319, -0.6043],
          [ 0.1986, -1.4481, -0.1419,  1.9776,  0.2299,  0.1118, -0.7816],
          [ 1.2489, -0.6024, -0.2227,  1.0146,  0.2186, -2.1565,  0.2137],
          [ 0.4975,  0.4443,  1.5600, -0.5297, -0.1383, -0.2127, -0.2384],
          [-0.7814, -0.4293, -0.1300,  0.6533, -0.1616, -2.1529,  0.4245]]]])
Convolution weights:
Parameter containing:
tensor([[[[ 0.1036,  0.0479, -0.2199],
          [-0.0706,  0.1298,  0.2060],
          [-0.1952, -0.0824, -0.1373]],


         [[ 0.0859,  0.0842, -0.0924],
          [-0.1088, -0.1166,  0.2203],
          [ 0.0604, -0.0275,  0.1634]]],


        [[[-0.0427, -0.0766, -0.1260],
          [ 0.0904, -0.0593,  0.1716],
          [ 0.0350, -0.0175,  0.0726]],

         [[-0.0993, -0.1414, -0.0991],
          [-0.0991, -0.1124, -0.0041],
          [ 0.2299, -0.1311, -0.0510]]]])
Convolution bias:
Parameter containing:
tensor([-0.0496, -0.1585])
Output of the 2D convolution:
tensor([[[[-0.1300,  0.0479,  0.1419],
          [-0.3342,  0.5077, -0.2927]],


         [[ 0.1763,  0.0098,  0.8924],
          [-0.9849, -0.4986,  0.2118]]]])
Output size:
torch.Size([1, 2, 2, 3])
Output of the first kernel (top-left element):
tensor(-0.1300)
Output of the second kernel (top-left element):
tensor(0.1763)
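
For reference, with the default padding=0 and dilation=1 the spatial size of the output follows the standard Conv2d formula; a minimal sanity check on the numbers above:

# Output spatial size for padding=0, dilation=1 (the Conv2d defaults used here):
#   H_out = floor((H_in - kernel_size) / stride) + 1
h_out = (5 - 3) // 2 + 1   # = 2
w_out = (7 - 3) // 2 + 1   # = 3
print(h_out, w_out)        # consistent with torch.Size([1, 2, 2, 3])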

Conclusion:

        Most of this should already be familiar, so it is not worth going over in detail; only one point deserves emphasis:

        [[[ 0.1036,  0.0479, -0.2199],
          [-0.0706,  0.1298,  0.2060],
          [-0.1952, -0.0824, -0.1373]],


         [[ 0.0859,  0.0842, -0.0924],
          [-0.1088, -0.1166,  0.2203],
          [ 0.0604, -0.0275,  0.1634]]]
  is a single convolution kernel, not
        [[ 0.1036,  0.0479, -0.2199],
          [-0.0706,  0.1298,  0.2060],
          [-0.1952, -0.0824, -0.1373]]
on its own. Each kernel carries one 3x3 slice of weights per input channel (the weight tensor has shape [out_channels, in_channels, 3, 3]), and the per-channel products are summed into a single output value before the bias is added.
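
As a minimal cross-check (assuming the same m, input and output as above; not part of the original test), one can print the weight shape and feed the layer's own weight and bias to torch.nn.functional.conv2d, which should reproduce m(input):

import torch.nn.functional as F

# Each output channel owns an [in_channels, 3, 3] block of weights.
print(m.weight.shape)                 # torch.Size([2, 2, 3, 3])

# The functional form with the layer's weight, bias and stride
# should match the module's output.
ref = F.conv2d(input, m.weight, m.bias, stride=2)
print(torch.allclose(ref, output))    # expected: True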


