First, look at the external code that constructs the network:
# Change here to adapt to your data
# n_channels=3 for RGB images
# n_classes is the number of probabilities you want to get per pixel
# - For 1 class and background, use n_classes=1
# - For 2 classes, use n_classes=1
# - For N > 2 classes, use n_classes=N
net = UNet(n_channels=3, n_classes=1, bilinear=True)
The n_channels argument is 3 here because the input is RGB. For other kinds of data it might be 1, 2, or even 9 (for example a point cloud with xyz object coordinates, rgb color, and xyz position within the room).
n_classes is the number of mask classes: for one class plus background, or for two classes, use n_classes=1; for more than two classes, use n_classes=N according to your own project.
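As a concrete illustration of what n_channels means for the input tensor, here are some made-up example batches (hypothetical shapes, not from the repo), all in (batch, channels, height, width) layout:

import torch

rgb_batch = torch.randn(4, 3, 256, 256)    # RGB images -> n_channels=3
gray_batch = torch.randn(4, 1, 256, 256)   # grayscale images -> n_channels=1
# a point cloud rendered as a 9-channel "image": xyz object coordinates,
# rgb color, and xyz position in the room -> n_channels=9
cloud_batch = torch.randn(4, 9, 256, 256)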
bilinear refers to bilinear interpolation for upsampling, i.e. linear interpolation applied along two dimensions. A linear model is y = wx + b; interpolating linearly along each of two axes introduces a cross term, roughly f(x1, x2) = w1*x1*x2 + w2*x1 + w3*x2 + b, so the result is no longer a single straight line. It is like the curve-fitting exercise from an AI course: given a set of points you can fit them with a first-order line or with a higher-order curve such as y = w1*x² + w2*x + b; the underlying principle is the Taylor expansion.
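A small sketch (not from the repo) comparing the two upsampling options that the bilinear flag switches between inside the Up blocks; both double the spatial size, but nn.Upsample interpolates with fixed bilinear weights while nn.ConvTranspose2d learns its weights:

import torch
import torch.nn as nn

x = torch.arange(4.0).reshape(1, 1, 2, 2)   # a tiny 1-channel 2x2 feature map
up_bilinear = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
up_learned = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2)

print(up_bilinear(x).shape)   # torch.Size([1, 1, 4, 4]), values interpolated
print(up_learned(x).shape)    # torch.Size([1, 1, 4, 4]), values produced by learned weights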
Now look at the constructor of the UNet class:
class UNet(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=True):
        super(UNet, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear

        self.inc = DoubleConv(n_channels, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if bilinear else 1
        self.down4 = Down(512, 1024 // factor)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)
        self.outc = OutConv(64, n_classes)
You can see that the UNet class inherits from nn.Module, which is what allows a user-defined class like this to act as a network. Here are the imported modules, pasted for reference:
import torch
import torch.nn as nn
import torch.nn.functional as F
__init__ is the constructor Python calls when the class is instantiated; readers coming from C++ may find this puzzling at first, but it becomes familiar after reading enough code.
The first statement, super(UNet, self).__init__(), calls the parent class constructor so that nn.Module is properly initialized before any sub-modules are registered.
The second to fourth statements store the externally supplied n_channels, n_classes, and bilinear arguments on the instance.
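As a minimal sketch of this nn.Module constructor pattern (a hypothetical TinyNet, not part of the UNet repo): call super().__init__() first, store the constructor arguments, then register sub-modules.

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()                  # initialize nn.Module internals first
        self.in_channels = in_channels      # keep the constructor arguments on the instance
        self.out_channels = out_channels
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

tiny = TinyNet(3, 8)
print(tiny(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 8, 32, 32])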
For the fifth statement, see the earlier post "window 学习pytorch unet代码之self.inc = DoubleConv(n_channels, 64)".
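Since DoubleConv is reused by the Down and Up blocks below, here is a sketch of what it typically looks like in this repository family (conv => BatchNorm => ReLU, twice); details such as bias or padding may differ slightly from the referenced post:

class DoubleConv(nn.Module):
    """(convolution => [BN] => ReLU) * 2"""

    def __init__(self, in_channels, out_channels, mid_channels=None):
        super().__init__()
        if not mid_channels:
            mid_channels = out_channels
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.double_conv(x)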
Statements six to ten build the encoder (down) path. Note the factor variable: with bilinear upsampling the deepest block outputs 1024 // 2 = 512 channels, because bilinear interpolation cannot change the channel count, so the reduction has to be done by the convolutions in the Up blocks (see the comment inside the Up class below).
As the class name Down suggests, and just as in the U-Net paper, the encoder downsamples four times; each Down block first max-pools once and then applies a double convolution. Here is the source:
class Down(nn.Module):
    """Downscaling with maxpool then double conv"""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.maxpool_conv = nn.Sequential(
            nn.MaxPool2d(2),
            DoubleConv(in_channels, out_channels)
        )

    def forward(self, x):
        return self.maxpool_conv(x)
Just as we guessed: one nn.MaxPool2d followed by a DoubleConv. Again, see the post "window 学习pytorch unet代码之self.inc = DoubleConv(n_channels, 64)" for the double convolution.
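A quick sanity check, assuming the Down and DoubleConv definitions above: each Down halves the spatial resolution and, here, doubles the channel count.

down = Down(64, 128)
x = torch.randn(1, 64, 128, 128)
print(down(x).shape)   # torch.Size([1, 128, 64, 64]) -- spatial size halved, channels doubled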
After the down path come four Up blocks and one OutConv.
First, Up:
class Up(nn.Module):
    """Upscaling then double conv"""

    def __init__(self, in_channels, out_channels, bilinear=True):
        super().__init__()
        # if bilinear, use the normal convolutions to reduce the number of channels
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)
You can see that Up first upsamples and then applies a DoubleConv.
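The snippet above omits Up's forward method. A sketch of what it usually does (this method goes inside the Up class and follows the common implementation rather than being quoted from the post): upsample the decoder feature map x1, pad it so its size matches the skip-connection tensor x2 from the encoder, concatenate the two along the channel dimension, and run the double conv.

    def forward(self, x1, x2):
        x1 = self.up(x1)
        # pad x1 so that its height/width match the skip connection x2
        diff_y = x2.size()[2] - x1.size()[2]
        diff_x = x2.size()[3] - x1.size()[3]
        x1 = F.pad(x1, [diff_x // 2, diff_x - diff_x // 2,
                        diff_y // 2, diff_y - diff_y // 2])
        # concatenate along the channel dimension, then apply the double conv
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)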
Finally, OutConv:
class OutConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(OutConv, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
From the network's point of view this is just a single 1x1 conv2d, mapping the 64 feature channels to n_classes output channels per pixel.
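A quick shape check (calling the 1x1 convolution directly, since the snippet above omits OutConv's forward, which presumably just applies self.conv):

outc = OutConv(64, 1)
feat = torch.randn(1, 64, 256, 256)
print(outc.conv(feat).shape)   # torch.Size([1, 1, 256, 256]) -- one logit per pixel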
At this point the net is fully assembled. You can see that building a network in PyTorch is much simpler than in TensorFlow: TensorFlow code tends to scatter settings such as decay and learning rate throughout the graph definition, whereas in PyTorch you just lay out the modules that make up the net.