PyTorch vs. TensorFlow: Convolutional Layer Implementations and Channel Ordering

Comparing the PyTorch and TensorFlow convolutional layer implementations

      • torch.nn.Conv2d
      • tensorflow.keras.layers.Conv2D

torch.nn.Conv2d


import torch
import torch.nn as nn

# Input channels 3, output channels 16, kernel size 5x5
torch.nn.Conv2d(3, 16, 5)
# Input channels 16, output channels 33, kernel size 3x3, stride 2
m = nn.Conv2d(16, 33, 3, stride=2)
# Kernel size 3x5, row stride 2, column stride 1, padding (4, 2)
m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
# Same as above, with dilation (3, 1)
m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
# input is a PyTorch tensor; its channel order is [batch, channels, height, width].
# A TensorFlow tensor defaults to [batch, height, width, channels].
input = torch.randn(20, 16, 50, 100)
output = m(input)
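The output spatial size of the dilated example above can be checked against the Conv2d output-size formula documented by PyTorch. The helper below is a small sketch (the function name `conv2d_out` is my own, not part of either framework's API):

```python
import math

def conv2d_out(size, kernel, stride=1, padding=0, dilation=1):
    # PyTorch Conv2d output size per spatial dimension:
    # floor((size + 2*padding - dilation*(kernel-1) - 1) / stride + 1)
    return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

# Height/width of input (20, 16, 50, 100) after
# Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
h = conv2d_out(50, 3, stride=2, padding=4, dilation=3)   # 26
w = conv2d_out(100, 5, stride=1, padding=2, dilation=1)  # 100
print(h, w)  # so output.shape is (20, 33, 26, 100)
```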

tensorflow.keras.layers.Conv2D

import tensorflow as tf

input_shape = (4, 28, 28, 3)
x = tf.random.normal(input_shape)
# Output channels 2, kernel size 3x3
y = tf.keras.layers.Conv2D(2, 3, activation='relu', input_shape=input_shape[1:])(x)
print(y.shape)
# Result: (4, 26, 26, 2)
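Because the two frameworks disagree on channel order, tensors usually have to be permuted when moving data between them. A minimal framework-agnostic sketch using NumPy (`np.ndarray.transpose` reorders axes the same way `torch.Tensor.permute` does):

```python
import numpy as np

# A PyTorch-style tensor: [batch, channels, height, width]
nchw = np.zeros((20, 16, 50, 100))

# Reorder to TensorFlow's default [batch, height, width, channels]
nhwc = nchw.transpose(0, 2, 3, 1)
print(nhwc.shape)  # (20, 50, 100, 16)

# And back to channels-first
back = nhwc.transpose(0, 3, 1, 2)
print(back.shape)  # (20, 16, 50, 100)
```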
