PyTorch: Implementing Downsampling (Convolution and Pooling)

# Kernel Size and Downsampling Implementation

Tags (space-separated): deep learning
https://www.zhihu.com/question/307839910 (on the difference between convolutional downsampling and pooling downsampling)
---

### 1. Implementing downsampling: max pooling, or a conv with k=2, s=2; both downsample by a factor of 2 (pooling is fairly crude and may filter out useful information, while convolutional downsampling controls the stride with learned weights, so it fuses information better)
```
self.conv_downsampling = nn.Conv2d(3, 3, kernel_size=2, stride=2)
self.max_pooling = nn.MaxPool2d(kernel_size=2)
# Output (for a 1x3x256x256 input):
# torch.Size([1, 3, 128, 128]) torch.Size([1, 3, 128, 128])

```
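The halving follows from the standard output-size rule `out = floor((in + 2p - k) / s) + 1` (note `nn.MaxPool2d`'s stride defaults to its kernel size, so `kernel_size=2` implies `s=2`). A minimal sketch checking it; the helper `conv_out_size` is ours, not part of the original post:
```
def conv_out_size(h, k, s=1, p=0):
    # Conv2d/MaxPool2d output size: floor((h + 2p - k) / s) + 1
    return (h + 2 * p - k) // s + 1

print(conv_out_size(256, k=2, s=2))  # 128: both layers halve a 256-pixel side
```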

### 2. With kernel sizes 3, 5, and 7 and padding 1, 2, and 3 respectively, the image size is unchanged after convolution (PyTorch's Conv2d defaults to s=1, p=0)
```
self.conv33 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
self.conv55 = nn.Conv2d(3, 3, kernel_size=5, padding=2)
self.conv77 = nn.Conv2d(3, 3, kernel_size=7, padding=3)
# Output (for a 1x3x256x256 input):
# torch.Size([1, 3, 256, 256]) torch.Size([1, 3, 256, 256]) torch.Size([1, 3, 256, 256])

```
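More generally, with stride 1 and an odd kernel size k, padding = (k - 1) / 2 keeps the spatial size. A quick check on the same 1x3x256x256 input (a minimal sketch, not part of the original code):
```
import torch
from torch import nn

x = torch.randn(1, 3, 256, 256)
for k in (3, 5, 7):
    p = (k - 1) // 2  # padding 1, 2, 3 for k = 3, 5, 7
    print(nn.Conv2d(3, 3, kernel_size=k, padding=p)(x).shape)
# torch.Size([1, 3, 256, 256]) each time
```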
### 3. Counting convolution parameters: kernel size (kernel_h * kernel_w) * in_channels * out_channels (plus one bias per output channel when bias=True)
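To verify the formula, count a layer's parameters directly. A minimal sketch; the layer sizes below are arbitrary examples, not from the original post:
```
from torch import nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(conv.weight.numel())  # 3 * 3 * 3 * 16 = 432 weights
print(conv.bias.numel())    # one bias per output channel: 16
```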

## Code
```
import torch
from torch import nn


class Test(nn.Module):
    def __init__(self):
        super().__init__()
        # Two ways to downsample by a factor of 2: strided conv vs. max pooling.
        self.conv_downsampling = nn.Conv2d(3, 3, kernel_size=2, stride=2)
        self.max_pooling = nn.MaxPool2d(kernel_size=2)
        # Odd kernels with padding = (k - 1) / 2 preserve the spatial size.
        self.conv33 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv55 = nn.Conv2d(3, 3, kernel_size=5, padding=2)
        self.conv77 = nn.Conv2d(3, 3, kernel_size=7, padding=3)

    def forward(self, x):
        out1 = self.conv_downsampling(x)
        out2 = self.max_pooling(x)
        out3 = self.conv33(x)
        out4 = self.conv55(x)
        out5 = self.conv77(x)
        print(out1.shape, out2.shape,
              out3.shape, out4.shape,
              out5.shape)


if __name__ == '__main__':
    data = torch.randn(1, 3, 256, 256)
    test_func = Test()
    test_func(data)

```
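Running the script prints the shapes quoted in sections 1 and 2: the strided conv and max pooling both give torch.Size([1, 3, 128, 128]), while the three padded convolutions keep torch.Size([1, 3, 256, 256]).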
