Adapted from: https://github.com/junfu1115/DANet
# Environment
CUDA: 8.0
cuDNN: 7.1.3
Python: 3.5
pytorch-encoding: 0.5.2 (can be installed as torch-encoding)
PyTorch: 1.0.0 (pytorch/pytorch, master branch)
gcc: 5.4.0
ninja: 1.8.2 (see "Install newer ninja-build (1.8.2) on Ubuntu 14.04 Trusty": https://www.claudiokuenzler.com/blog/756/install-newer-ninja-build-tools-ubuntu-14.04-trusty)
We propose a Dual Attention Network (DANet) that adaptively integrates local features with their global dependencies based on the self-attention mechanism. It achieves new state-of-the-art segmentation performance on three challenging scene-segmentation datasets: Cityscapes, PASCAL Context, and COCO Stuff-10k.
We train our DANet-101 with only the fine annotated data and submit the test results to the official evaluation server.
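The core of DANet is two self-attention modules on top of a dilated ResNet backbone: a position attention module that re-weights each spatial location by its similarity to every other location, and a channel attention module that does the same across channels. Below is a minimal PyTorch sketch of the position attention idea; the class and variable names are ours for illustration, not the repository's exact code.

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Minimal sketch of DANet's position attention module (PAM).

    Each spatial position is updated by a weighted sum over all positions,
    with weights given by feature similarity (self-attention). Illustrative
    only; see the repository for the exact implementation.
    """

    def __init__(self, in_channels):
        super().__init__()
        # Query/key projections reduce channels (C//8 in the paper).
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable scale, initialized to 0 so the module starts as identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.size()
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                     # B x C' x HW
        attn = torch.softmax(torch.bmm(q, k), dim=-1)          # B x HW x HW
        v = self.value(x).view(b, -1, h * w)                   # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x  # residual connection
```

The channel attention module follows the same pattern, but computes the affinity matrix across channels (C x C) instead of across positions.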
Clone the repository:
git clone https://github.com/junfu1115/DANet.git
cd DANet
python setup.py install
# Dataset
Download the Cityscapes dataset and put it in folder ./datasets.
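The exact subfolder names depend on the dataloader, but a standard Cityscapes download unpacks into the layout below. This is a hypothetical sanity check we added for illustration, not part of the repository:

```python
import os

# Standard Cityscapes layout; adjust if the dataloader expects otherwise.
ROOT = "./datasets/cityscapes"
EXPECTED = [
    "leftImg8bit/train", "leftImg8bit/val", "leftImg8bit/test",  # images
    "gtFine/train", "gtFine/val",                                # fine labels
]

for sub in EXPECTED:
    path = os.path.join(ROOT, sub)
    print(f"{path}: {'ok' if os.path.isdir(path) else 'MISSING'}")
```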
# Evaluation
Download the trained DANet-101 model and put it in folder ./danet/cityscapes/model
Evaluation code is in folder ./danet/cityscapes
cd danet
For single-scale testing, run:
CUDA_VISIBLE_DEVICES=0,1,2,3 python test.py --dataset cityscapes --model danet --resume-dir cityscapes/model --base-size 2048 --crop-size 768 --workers 1 --backbone resnet101 --multi-grid --multi-dilation 4 8 16 --eval
For multi-scale testing, run:
CUDA_VISIBLE_DEVICES=0,1,2,3 python test.py --dataset cityscapes --model danet --resume-dir cityscapes/model --base-size 2048 --crop-size 1024 --workers 1 --backbone resnet101 --multi-grid --multi-dilation 4 8 16 --eval --multi-scales
To generate predictions on the test set for submission to the evaluation server (note the missing --eval flag), run:
CUDA_VISIBLE_DEVICES=0,1,2,3 python test.py --dataset cityscapes --model danet --resume-dir cityscapes/model --base-size 2048 --crop-size 768 --workers 1 --backbone resnet101 --multi-grid --multi-dilation 4 8 16
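Conceptually, --multi-scales averages predictions over several rescaled copies of each image. Below is a minimal sketch of that idea, not the repository's exact evaluator (which also handles flipping and crop tiling); the function name and scale set are ours:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_scale_predict(model, image, scales=(0.75, 1.0, 1.25)):
    """Average segmentation logits over several input scales.

    `model` maps a B x 3 x H x W image to B x num_classes x h x w logits.
    Illustrative only; the repo's evaluator also flips and tiles crops.
    """
    _, _, h, w = image.shape
    fused = 0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(scaled)
        # Resize logits back to the original resolution before averaging.
        fused = fused + F.interpolate(logits, size=(h, w), mode="bilinear",
                                      align_corners=False)
    return (fused / len(scales)).argmax(dim=1)  # per-pixel class ids
```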
Evaluation result:
The expected scores are as follows (single-scale testing is denoted 'ss', multi-scale testing 'ms'):
DANet101 on cityscapes val set (mIoU/pAcc): 79.93/95.97 (ss) and 81.49/96.41 (ms)
# Training
Training code is in folder ./danet/cityscapes
cd danet
You can reproduce our result by running:
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset cityscapes --model danet --backbone resnet101 --checkname danet101 --base-size 1024 --crop-size 768 --epochs 240 --batch-size 8 --lr 0.003 --workers 2 --multi-grid --multi-dilation 4 8 16
Note: we adopt multiple losses at the end of the network for better training (a sketch follows below).
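A minimal sketch of such a multi-loss objective, assuming the model returns three logit maps (the fused output plus the two attention branches); the names and the three-output assumption are ours for illustration:

```python
import torch.nn as nn

# Hypothetical: assumes the model returns (fused, pam_out, cam_out) logits.
criterion = nn.CrossEntropyLoss(ignore_index=-1)  # skip unlabeled pixels

def multi_loss(outputs, target):
    fused, pam_out, cam_out = outputs
    # Supervising each attention branch separately provides extra gradient
    # signal compared with training on the fused prediction alone.
    return (criterion(fused, target)
            + criterion(pam_out, target)
            + criterion(cam_out, target))
```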
If DANet is useful for your research, please consider citing:
@article{fu2018dual,
title={Dual Attention Network for Scene Segmentation},
author={Fu, Jun and Liu, Jing and Tian, Haijie and Fang, Zhiwei and Lu, Hanqing},
journal={arXiv preprint arXiv:1809.02983},
year={2018}
}
Thanks to PyTorch-Encoding, especially for the Synchronized BN!