UNet++: A Nested U-Net Architecture for Medical Image Segmentation
UNet++ is a general-purpose image segmentation architecture designed for more accurate segmentation. UNet++ consists of U-Nets of varying depths whose decoders are densely connected at the same resolution via redesigned skip pathways, which aim to address two key challenges of the U-Net: 1) the unknown depth of the optimal architecture and 2) the unnecessarily restrictive design of its skip connections.
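The following is a minimal, self-contained sketch of these nested skip pathways, not the repository's implementation (which additionally supports pretrained backbones and deep supervision). Each decoder node X^{i,j} concatenates all preceding feature maps at the same resolution, X^{i,0..j-1}, with the upsampled output of X^{i+1,j-1}. The names conv_block and toy_unet_plus_plus are hypothetical, and tf.keras is assumed.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, a standard U-Net-style block (illustrative choice).
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return x

def toy_unet_plus_plus(input_shape=(96, 96, 3), depth=4, base_filters=32):
    inputs = layers.Input(input_shape)
    filters = [base_filters * 2 ** i for i in range(depth)]
    X = {}  # X[(i, j)] holds the feature map of node X^{i,j}

    # Backbone column X^{i,0}: a plain encoder with downsampling between levels.
    X[(0, 0)] = conv_block(inputs, filters[0])
    for i in range(1, depth):
        X[(i, 0)] = conv_block(layers.MaxPooling2D(2)(X[(i - 1, 0)]), filters[i])

    # Nested decoder nodes X^{i,j}, j >= 1: dense skip connections at each resolution.
    for j in range(1, depth):
        for i in range(depth - j):
            up = layers.Conv2DTranspose(filters[i], 2, strides=2, padding='same')(X[(i + 1, j - 1)])
            skips = [X[(i, k)] for k in range(j)]
            X[(i, j)] = conv_block(layers.Concatenate()(skips + [up]), filters[i])

    # Final 1x1 convolution on the top node of the deepest column (deep supervision omitted here).
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(X[(0, depth - 1)])
    return Model(inputs, outputs)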
Paper
This repository provides the official Keras implementation of UNet++ in the following papers:
UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang
Arizona State University
IEEE Transactions on Medical Imaging (TMI)
paper | code
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang
Arizona State University
Deep Learning in Medical Image Analysis (DLMIA) 2018. (Oral)
paper | code | slides | poster | blog
Other implementations
[PyTorch] (by 4ui_iurz1)
[PyTorch] (by Hong Jing)
[PyTorch] (by ZJUGiveLab)
[Keras] (by Siddhartha)
What is in this repository
1. Available architectures
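Based on the import used in the code examples later in this README, the package exposes three architectures: the standard U-Net, a nested variant (Nestnet), and UNet++, which is named Xnet in the code.
from segmentation_models import Unet, Nestnet, Xnet  # U-Net, nested variant, UNet++ (Xnet)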
2. Available backbones
Backbone model      | Name              | Weights
--------------------|-------------------|----------------------------------
VGG16               | vgg16             | imagenet
VGG19               | vgg19             | imagenet
ResNet18            | resnet18          | imagenet
ResNet34            | resnet34          | imagenet
ResNet50            | resnet50          | imagenet, imagenet11k-places365ch
ResNet101           | resnet101         | imagenet
ResNet152           | resnet152         | imagenet, imagenet11k
ResNeXt50           | resnext50         | imagenet
ResNeXt101          | resnext101        | imagenet
DenseNet121         | densenet121       | imagenet
DenseNet169         | densenet169       | imagenet
DenseNet201         | densenet201       | imagenet
Inception V3        | inceptionv3       | imagenet
Inception ResNet V2 | inceptionresnetv2 | imagenet
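As a brief usage note (a minimal sketch, assuming the import below succeeds): the Name column is the string passed as backbone_name, and the Weights column lists the values accepted by encoder_weights; passing None initializes the encoder randomly.
from segmentation_models import Xnet

# Pick any row of the table: backbone_name comes from "Name", encoder_weights from "Weights".
model = Xnet(backbone_name='densenet121', encoder_weights='imagenet')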
How to use UNet++
1. Requirements
Python 3.x, Keras 2.2.2, TensorFlow 1.4.1, and other common packages listed in requirements.txt.
2. Installation
git clone https://github.com/MrGiovanni/UNetPlusPlus.git
cd UNetPlusPlus
pip install -r requirements.txt
git submodule update --init --recursive
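As an optional sanity check (a minimal sketch, run from the UNetPlusPlus directory after the steps above), you can build a small UNet++ model with a randomly initialized encoder:
from segmentation_models import Xnet

model = Xnet(backbone_name='vgg16', encoder_weights=None)
print(model.output_shape)  # a 4-D shape whose last dimension is the number of classes (1 by default)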
3. Running the scripts
CUDA_VISIBLE_DEVICES=0 python DSB2018_application.py --run 1 \
--arch Xnet \
--backbone vgg16 \
--init random \
--decoder transpose \
--input_rows 96 \
--input_cols 96 \
--input_deps 3 \
--nb_class 1 \
--batch_size 2048 \
--weights None \
--verbose 1
CUDA_VISIBLE_DEVICES=0 python BRATS2013_application.py --run 1 \
--arch Xnet \
--backbone vgg16 \
--init random \
--decoder transpose \
--input_rows 256 \
--input_cols 256 \
--input_deps 3 \
--nb_class 1 \
--batch_size 2048 \
--weights None \
--verbose 1
4. Code examples for your own data
Train a UNet++ structure (Xnet in the code):
from segmentation_models import Unet, Nestnet, Xnet
# prepare data
x, y = ... # values in [0, 1]; the network expects 3 input channels
# prepare model
model = Xnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose') # build UNet++
# model = Unet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose') # build U-Net
# model = Nestnet(backbone_name='resnet50', encoder_weights='imagenet', decoder_block_type='transpose') # build DLA
model.compile('Adam', 'binary_crossentropy', ['binary_accuracy'])
# train model
model.fit(x, y)
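After training, inference follows standard Keras usage. The sketch below assumes a hypothetical x_test array prepared the same way as x above and thresholds the per-pixel probabilities to obtain binary masks.
# Predict on new data (float values in [0, 1], 3 channels, same size as the training images).
probs = model.predict(x_test)            # per-pixel foreground probabilities
masks = (probs > 0.5).astype('uint8')    # binary masks via a 0.5 threshold (illustrative choice)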
To do
Add VGG backbone for UNet++
Add ResNet backbone for UNet++
Add ResNeXt backbone for UNet++
Add DenseNet backbone for UNet++
Add Inception backbone for UNet++
Add Tiramisu and Tiramisu++
Add FPN++
Add Linknet++
Add PSPNet++
Citation
If you use UNet++ for your research, please cite our papers:
@article{zhou2019unetplusplus,
title={UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation},
author={Zhou, Zongwei and Siddiquee, Md Mahfuzur Rahman and Tajbakhsh, Nima and Liang, Jianming},
journal={IEEE Transactions on Medical Imaging},
year={2019},
publisher={IEEE}
}
@incollection{zhou2018unetplusplus,
title={Unet++: A Nested U-Net Architecture for Medical Image Segmentation},
author={Zhou, Zongwei and Siddiquee, Md Mahfuzur Rahman and Tajbakhsh, Nima and Liang, Jianming},
booktitle={Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support},
pages={3--11},
year={2018},
publisher={Springer}
}
Acknowledgments
This repository is built upon qubvel/segmentation_models. We appreciate the effort of Pavel Yakubovskiy in providing well-organized segmentation models to the community. This research has been supported in part by the NIH under Award Number R01HL128785 and by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This is a patent-pending technology.