PyTorch Neural Network Toolbox: nn

1. nn.Conv2d

  import torch as t
  from torch import nn
  from torch.autograd import Variable as V
  from PIL import Image
  from torchvision.transforms import ToTensor, ToPILImage

  to_tensor = ToTensor()  # img -> tensor
  to_pil = ToPILImage()   # tensor -> img
  lena = Image.open('imgs/lena.png')

  input = to_tensor(lena).unsqueeze(0)  # add a batch dimension of size 1

  # conv = nn.Conv2d(in_channels, out_channels, kernel_size,
  #                  stride=1, padding=0, dilation=1, groups=1, bias=True)
  conv = nn.Conv2d(1, 1, (3, 3), 1, bias=False)
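As a quick shape check, the convolution above can be run on a random tensor standing in for the grayscale image (a minimal sketch; the 224×224 size is an assumption, since `imgs/lena.png` may not be available):

```python
import torch as t
from torch import nn

# A random 1x1x224x224 tensor stands in for a single-channel image batch.
x = t.randn(1, 1, 224, 224)
conv = nn.Conv2d(1, 1, (3, 3), 1, bias=False)
y = conv(x)
# With kernel_size=3, stride=1, padding=0: output side = (224 - 3) // 1 + 1 = 222
print(y.shape)  # torch.Size([1, 1, 222, 222])
```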

2. AvgPool

 pool = nn.AvgPool2d(2, 2)
 out = pool(V(input))
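On a tiny hand-built tensor the averaging is easy to verify by eye (a minimal sketch; modern PyTorch no longer needs the `Variable` wrapper, so plain tensors are used here):

```python
import torch as t
from torch import nn

# 1x1x4x4 input: rows are [0,1,2,3], [4,5,6,7], [8,9,10,11], [12,13,14,15]
x = t.arange(16, dtype=t.float32).view(1, 1, 4, 4)
pool = nn.AvgPool2d(2, 2)  # kernel_size=2, stride=2: halves each spatial dim
y = pool(x)
# The top-left 2x2 block is {0, 1, 4, 5}, whose mean is 2.5
print(y)  # tensor([[[[ 2.5,  4.5], [10.5, 12.5]]]])
```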

3. Linear (fully connected layer)

input = V(t.randn(2,3))
linear = nn.Linear(3,4)
h = linear(input)
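The same call can be checked shape-wise without the `Variable` wrapper (in recent PyTorch versions `Variable` has been merged into `Tensor`, so wrapping is unnecessary):

```python
import torch as t
from torch import nn

linear = nn.Linear(3, 4)  # in_features=3, out_features=4
x = t.randn(2, 3)         # a batch of 2 samples with 3 features each
h = linear(x)             # computes x @ W.T + b
print(h.shape)  # torch.Size([2, 4])
```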

4. Activation functions

relu = nn.ReLU(inplace=True)
output = relu(input)

The ReLU function has an `inplace` parameter. If set to True, the output directly overwrites the input, which saves memory; this is safe for ReLU because its backward-pass gradient can be computed from the output alone. However, only a few autograd operations support in-place computation, so `inplace=True` should be used with caution unless you are sure it is supported.
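The overwriting behavior described above can be observed directly (a small sketch):

```python
import torch as t
from torch import nn

x = t.tensor([-1.0, 0.0, 2.0])
relu = nn.ReLU(inplace=True)
y = relu(x)
# With inplace=True the negative entries of x itself are zeroed out,
# and the returned tensor is the very same object as the input.
print(x)       # tensor([0., 0., 2.])
print(y is x)  # True
```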
