Paper: "U-Net: Convolutional Networks for Biomedical Image Segmentation", MICCAI 2015
U-Net is a CNN model proposed for medical image segmentation. With roughly 20,000 citations on Google Scholar at the time of writing, it is a highly influential piece of work.
U-Net inherits the encoder-decoder structure of FCN, but fuses shallow and deep features more thoroughly. The network architecture diagram from the paper is shown below:
The left half of the figure is the encoder: repeated convolution and pooling operations progressively shrink the feature maps while increasing the number of channels. The authors follow a VGG-like strategy: each pooling step halves the side length of the feature map, and the channel count is doubled for the next stage.
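As a quick sanity check on the size arithmetic, here is a small sketch; the 256×256 input and 64 starting channels match the reimplementation later in this post and are illustrative, not mandated by the architecture:

```python
# Illustrative encoder size progression: each 2x2 pooling halves the side
# length, and each new stage doubles the channel count.
size, channels = 256, 64  # assumed input size and first-stage width
shapes = []
for stage in range(1, 5):
    shapes.append((channels, size, size))  # pre-pool feature map of this stage
    size //= 2      # 2x2 max-pooling halves the side length
    channels *= 2   # the next stage doubles the channels
print(shapes)  # [(64, 256, 256), (128, 128, 128), (256, 64, 64), (512, 32, 32)]
```

These are exactly the shapes of the skip-connection features `x1`–`x4` in the code below.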
The right half of the figure is the decoder: repeated convolution and transposed convolution ("deconvolution") progressively enlarge the feature maps while reducing the channel count. The feature maps from the corresponding encoder stage are copied over and concatenated with the decoder feature maps before the subsequent convolutions, which fuses the shallow information back in.
Encoder and decoder feature maps are fused by concatenation (concat): tensors with the same spatial size are stacked along the channel dimension. As the figure shows, the encoder and decoder feature maps at corresponding stages do not have exactly the same size, because of the unpadded convolutions and pooling; the authors therefore center-crop the encoder feature map so that it matches the size of the corresponding decoder feature map (the crop is indicated by the dashed lines drawn on the feature maps in the figure).
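The crop-then-concat step can be sketched in a few lines of NumPy; the 136/104 sizes below correspond to one stage of the paper's figure, and `center_crop` is a hypothetical helper written for this illustration:

```python
import numpy as np

def center_crop(feat, th, tw):
    # feat: (N, C, H, W); keep the central (th, tw) region
    _, _, h, w = feat.shape
    i = (h - th) // 2
    j = (w - tw) // 2
    return feat[:, :, i:i + th, j:j + tw]

enc = np.zeros((1, 64, 136, 136), dtype=np.float32)  # encoder skip feature
dec = np.zeros((1, 64, 104, 104), dtype=np.float32)  # upsampled decoder feature

# crop the encoder map to 104x104, then stack along the channel dimension
fused = np.concatenate([center_crop(enc, 104, 104), dec], axis=1)
print(fused.shape)  # (1, 128, 104, 104)
```

Note that the reimplementation below takes the opposite route: it pads the decoder feature map up to the encoder size instead of cropping the encoder one down.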
Element-wise addition is also worth considering as a fusion strategy: its results are not necessarily worse than concatenation, and it consumes less GPU memory.
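A shapes-only sketch of the two fusion styles (sizes illustrative) shows where the memory saving comes from: additive fusion keeps the channel count unchanged, so the convolutions that follow see half as many input channels:

```python
import numpy as np

enc = np.zeros((1, 64, 104, 104), dtype=np.float32)  # encoder skip feature
dec = np.zeros((1, 64, 104, 104), dtype=np.float32)  # upsampled decoder feature

concat_fused = np.concatenate([enc, dec], axis=1)  # channels double: 64 + 64 = 128
add_fused = enc + dec                              # channels unchanged: 64
print(concat_fused.shape, add_fused.shape)  # (1, 128, 104, 104) (1, 64, 104, 104)
```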
The network's final output is an image with one channel per class (in the modified implementation later in this post it also has the same height and width as the input). A softmax then turns each pixel's channel values into probabilities over the classes. For training, the label is one-hot encoded into an n-channel image (n being the number of classes) and the cross-entropy loss is computed against the softmax output. For inference, an argmax over the channels picks, at each position, the channel with the highest probability, and that pixel is assigned the corresponding class.
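The whole per-pixel classification head can be sketched in NumPy (toy sizes, assumed here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 7, 4, 4))            # (N, num_classes, H, W)
labels = rng.integers(0, 7, size=(2, 4, 4))       # integer class map

# softmax over the channel (class) dimension
e = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = e / e.sum(axis=1, keepdims=True)          # each pixel's channels sum to 1

# training: one-hot encode the label into 7 channels, then cross-entropy
one_hot = np.eye(7)[labels].transpose(0, 3, 1, 2)  # (N, 7, H, W)
loss = -(one_hot * np.log(probs)).sum(axis=1).mean()

# inference: argmax over channels gives the predicted class per pixel
pred = probs.argmax(axis=1)
print(pred.shape)  # (2, 4, 4)
```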
As the figure shows, the output (388×388) is smaller than the input (572×572): the authors use only unpadded ("valid") convolutions, so each 3×3 convolution trims one pixel from every border, and the network ends up predicting only the central region of its input tile. The missing border context is supplied by mirroring the image at its edges (the paper's overlap-tile strategy). This avoids padding artifacts at tile borders and lets arbitrarily large images be segmented seamlessly, tile by tile.
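The paper's 572 → 388 sizes can be reproduced with a few lines of arithmetic, since every valid 3×3 convolution shrinks each side by 2:

```python
# Reproduce the paper's input/output sizes under valid (unpadded) convolutions.
s = 572
for _ in range(4):       # encoder stage: two valid 3x3 convs, then 2x2 pooling
    s = (s - 4) // 2
s -= 4                   # two valid 3x3 convs at the bottleneck
for _ in range(4):       # decoder stage: 2x2 up-conv, then two valid 3x3 convs
    s = s * 2 - 4
print(s)  # 388
```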
Reimplementation: the U-Net code under the PaddlePaddle framework is as follows:
import numpy as np
import paddle
import paddle.fluid as fluid
from paddle.fluid.dygraph import to_variable
from paddle.fluid.dygraph import Layer
from paddle.fluid.dygraph import Conv2D
from paddle.fluid.dygraph import BatchNorm
from paddle.fluid.dygraph import Pool2D
from paddle.fluid.dygraph import Conv2DTranspose
class Encoder(Layer):
    """Two 3x3 conv + BN(ReLU) blocks followed by 2x2 max-pooling.

    Returns both the pre-pool features (kept for the skip connection)
    and the pooled features (fed to the next, deeper stage)."""
    def __init__(self, num_channels, num_filters):
        super().__init__()
        self.conv1 = Conv2D(num_channels,
                            num_filters,
                            filter_size=3,
                            stride=1,
                            padding=1)
        self.bn1 = BatchNorm(num_filters, act='relu')
        self.conv2 = Conv2D(num_filters,
                            num_filters,
                            filter_size=3,
                            stride=1,
                            padding=1)
        self.bn2 = BatchNorm(num_filters, act='relu')
        # ceil_mode keeps odd-sized inputs from dropping a row/column
        self.pool = Pool2D(pool_size=2, pool_stride=2, pool_type='max', ceil_mode=True)

    def forward(self, inputs):
        x = self.conv1(inputs)
        x = self.bn1(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x_pooled = self.pool(x)
        return x, x_pooled  # x: skip connection, x_pooled: next stage input
class Decoder(Layer):
    """2x2 transposed conv (upsampling), concat with the encoder skip feature,
    then two 3x3 conv + BN(ReLU) blocks."""
    def __init__(self, num_channels, num_filters):
        super(Decoder, self).__init__()
        # the 2x2 transposed convolution doubles the spatial size
        # and halves the channel count
        self.up = Conv2DTranspose(num_channels=num_channels,
                                  num_filters=num_filters,
                                  filter_size=2,
                                  stride=2)
        # after concatenation the channel count is num_filters * 2 == num_channels
        self.conv1 = Conv2D(num_channels,
                            num_filters,
                            filter_size=3,
                            stride=1,
                            padding=1)
        self.bn1 = BatchNorm(num_filters, act='relu')
        self.conv2 = Conv2D(num_filters,
                            num_filters,
                            filter_size=3,
                            stride=1,
                            padding=1)
        self.bn2 = BatchNorm(num_filters, act='relu')

    def forward(self, inputs_prev, inputs):
        x = self.up(inputs)
        # pad the upsampled map so it matches the skip feature's spatial size
        # (the inverse of the paper's center-crop of the encoder feature)
        h_diff = inputs_prev.shape[2] - x.shape[2]
        w_diff = inputs_prev.shape[3] - x.shape[3]
        x = fluid.layers.pad2d(x, paddings=[h_diff // 2, h_diff - h_diff // 2,
                                            w_diff // 2, w_diff - w_diff // 2])
        # channel-wise concatenation with the encoder skip feature
        x = fluid.layers.concat([inputs_prev, x], axis=1)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.conv2(x)
        x = self.bn2(x)
        return x
class UNet(Layer):
    def __init__(self, num_classes=7):
        super(UNet, self).__init__()
        # encoder: channels 3 -> 64 -> 128 -> 256 -> 512
        self.down1 = Encoder(num_channels=3, num_filters=64)
        self.down2 = Encoder(num_channels=64, num_filters=128)
        self.down3 = Encoder(num_channels=128, num_filters=256)
        self.down4 = Encoder(num_channels=256, num_filters=512)
        # bottleneck (1x1 convolutions here, instead of the paper's 3x3)
        self.mid_conv1 = Conv2D(512, 1024, filter_size=1, padding=0, stride=1)
        self.mid_bn1 = BatchNorm(1024, act='relu')
        self.mid_conv2 = Conv2D(1024, 1024, filter_size=1, padding=0, stride=1)
        self.mid_bn2 = BatchNorm(1024, act='relu')
        # decoder: channels 1024 -> 512 -> 256 -> 128 -> 64
        self.up4 = Decoder(1024, 512)
        self.up3 = Decoder(512, 256)
        self.up2 = Decoder(256, 128)
        self.up1 = Decoder(128, 64)
        # 1x1 convolution maps 64 channels to one score map per class
        self.last_conv = Conv2D(num_channels=64, num_filters=num_classes, filter_size=1)

    def forward(self, inputs):
        # x1..x4 are the pre-pool skip features; x is the pooled output
        x1, x = self.down1(inputs)
        x2, x = self.down2(x)
        x3, x = self.down3(x)
        x4, x = self.down4(x)
        # middle layers
        x = self.mid_conv1(x)
        x = self.mid_bn1(x)
        x = self.mid_conv2(x)
        x = self.mid_bn2(x)
        # decoder stages, each fed the matching encoder skip feature
        x = self.up4(x4, x)
        x = self.up3(x3, x)
        x = self.up2(x2, x)
        x = self.up1(x1, x)
        x = self.last_conv(x)
        return x
def main():
    with fluid.dygraph.guard(fluid.CUDAPlace(0)):
        model = UNet(num_classes=7)
        # a random 256x256 RGB image as a smoke test
        x_data = np.random.rand(1, 3, 256, 256).astype(np.float32)
        inputs = to_variable(x_data)
        pred = model(inputs)
        print(pred.shape)

if __name__ == "__main__":
    main()
Here the padding of the convolutions in the encoder and decoder has been changed (padding=1 instead of the paper's unpadded convolutions), so the output has exactly the same size as the input. Running the code above prints:
[1, 7, 256, 256]
As expected, the result has the same spatial size as the input and one channel per class.