nn.Conv2d Convolution

Notes from studying the convolution layers in the torch framework.

I. nn.Conv1d

A one-dimensional convolution can process multi-dimensional data.

  1. nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    Parameters:
      in_channels: number of channels in the input data, e.g. 3 for an RGB image;
      out_channels: number of channels in the output data, adjusted according to the model;
      kernel_size: size of the convolution kernel, an int or a tuple; kernel_size=2 means the kernel has size 2, kernel_size=(2,3) means size 2 in the first dimension and size 3 in the second;
      stride: step size, default 1, same convention as kernel_size; stride=2 means a stride of 2 in every dimension, stride=(2,3) means a stride of 2 in the first dimension and 3 in the second;
      padding: amount of zero padding added to the input, default 0
  2. Example
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(10, 16, 30, 32, 34)
# batch, channel, d1, d2, d3
print(x.shape)
class Net_1D(nn.Module):
    def __init__(self):
        super(Net_1D, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(in_channels=16, out_channels=16, kernel_size=(3, 2, 2), stride=(2, 2, 1), padding=[2,2,2]),
            nn.ReLU()
        )
    def forward(self, x):
        output = self.layers(x)
        log_probs = F.log_softmax(output, dim=1)
        return  log_probs

n = Net_1D()
print(n)
y = n(x)
print(y.shape)

Result:

torch.Size([10, 16, 30, 32, 34])
Net_1D(
  (layers): Sequential(
    (0): Conv1d(16, 16, kernel_size=(3, 2, 2), stride=(2, 2, 1), padding=[2, 2, 2])
    (1): ReLU()
  )
)
torch.Size([10, 16, 16, 18, 37])
  3. Convolution output-size calculation
    d = (d - kernel_size + 2 * padding) / stride + 1 (rounded down)
    x = [10, 16, 30, 32, 34], so the three spatial dimensions are d1 = 30, d2 = 32, d3 = 34; the kernel sizes are 3, 2, 2; the strides are 2, 2, 1; and the paddings are 2, 2, 2.
    d1 = (30 - 3 + 2*2) / 2 + 1 = 31/2 + 1 = 15 + 1 = 16
    d2 = (32 - 2 + 2*2) / 2 + 1 = 34/2 + 1 = 17 + 1 = 18
    d3 = (34 - 2 + 2*2) / 1 + 1 = 36/1 + 1 = 36 + 1 = 37
    batch = 10, out_channel = 16

Therefore: y = [10, 16, 16, 18, 37]
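
For comparison, the more common nn.Conv1d use case is a 3-D input of shape (batch, channels, length). A minimal sketch follows; the shapes and layer parameters here are arbitrary illustration values, not taken from the example above:

import torch
import torch.nn as nn

# typical Conv1d input: (batch, channels, length)
x = torch.randn(10, 16, 30)   # batch=10, in_channels=16, length=30
conv = nn.Conv1d(in_channels=16, out_channels=8, kernel_size=3, stride=2, padding=2)
y = conv(x)

# same formula: (30 - 3 + 2*2) / 2 + 1 = 15 + 1 = 16
print(y.shape)  # torch.Size([10, 8, 16])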

II. nn.Conv2d

A two-dimensional convolution processes two-dimensional data, such as images.

  1. nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    Parameters:
      in_channels: number of channels in the input data, e.g. 3 for an RGB image;
      out_channels: number of channels in the output data, adjusted according to the model;
      kernel_size: size of the convolution kernel, an int or a tuple; kernel_size=2 means a (2,2) kernel, kernel_size=(2,3) means a (2,3) kernel, i.e. a non-square kernel;
      stride: step size, default 1, same convention as kernel_size; stride=2 means a stride of 2 both vertically and horizontally, stride=(2,3) means a stride of 2 along the height and 3 along the width;
      padding: amount of zero padding added to the input, default 0
  2. Example
    Input data X[10, 16, 30, 32], which stands for: a batch of 10 samples, 16 channels, height 30, width 32
import torch
import torch.nn as nn

x = torch.randn(10, 16, 30, 32) # batch, channel , height , width
print(x.shape)
m = nn.Conv2d(16, 33, (3, 2), (2, 1))  # in_channels, out_channels, kernel_size, stride
print(m)
y = m(x)
print(y.shape)

Result:

torch.Size([10, 16, 30, 32])
Conv2d(16, 33, kernel_size=(3, 2), stride=(2, 1))
torch.Size([10, 33, 14, 31])

  3. Convolution output-size calculation
    h/w = (h/w - kernel_size + 2 * padding) / stride + 1 (rounded down)
    x = [10, 16, 30, 32], so h = 30 and w = 32; the kernel sizes are h: 3, w: 2; the strides are h: 2, w: 1; padding defaults to 0.
    h = (30 - 3 + 2*0) / 2 + 1 = 27/2 + 1 = 13 + 1 = 14
    w = (32 - 2 + 2*0) / 1 + 1 = 30/1 + 1 = 30 + 1 = 31
    batch = 10, out_channel = 33
Therefore: y = [10, 33, 14, 31]
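
As a quick sanity check, the same formula can be wrapped in a small helper function (conv_out_size is a hypothetical name, not part of PyTorch) and compared against the actual Conv2d output:

import torch
import torch.nn as nn

def conv_out_size(d, kernel, stride=1, padding=0):
    # (d - kernel + 2 * padding) / stride + 1, rounded down
    return (d - kernel + 2 * padding) // stride + 1

x = torch.randn(10, 16, 30, 32)
m = nn.Conv2d(16, 33, (3, 2), (2, 1))
y = m(x)

h = conv_out_size(30, 3, stride=2)  # 14
w = conv_out_size(32, 2, stride=1)  # 31
print(y.shape)  # torch.Size([10, 33, 14, 31])
print(h, w)     # 14 31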
