AdaptSegNet is a classic adversarial-learning-based domain adaptation method. Methods in this family train a discriminator to align the target-domain distribution with the source domain, either in the output (prediction) space or in the feature space (AdaptSegNet showed that for semantic segmentation, alignment in the output space outperforms feature-space alignment), so that the segmentation model's performance generalizes from the source domain to the target domain. Besides AdaptSegNet, other classic works in this direction include:
Paper: https://arxiv.org/abs/1612.02649
Datasets: Cityscapes, SYNTHIA, GTA5
Hoffman J, Wang D, Yu F, et al. FCNs in the wild: Pixel-level adversarial and constraint-based adaptation[J]. arXiv preprint arXiv:1612.02649, 2016.
Paper: https://arxiv.org/abs/1802.10349
Code: https://github.com/wasidennis/AdaptSegNet
Datasets: Cityscapes, SYNTHIA, GTA5, Synscapes
Tsai Y H, Hung W C, Schulter S, et al. Learning to adapt structured output space for semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7472-7481.
Paper: https://arxiv.org/abs/1811.12833
Code: https://github.com/valeoai/ADVENT
Datasets: Cityscapes, SYNTHIA, GTA5
Vu T H, Jain H, Bucher M, et al. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 2517-2526.
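ADVENT's core idea is to minimize the Shannon entropy of the per-pixel predictions on unlabeled target images (optionally combined with adversarial training on the entropy maps), pushing target predictions toward confident, source-like output maps. A minimal illustrative sketch of the entropy loss (not the authors' code; shapes and names are assumptions):

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits, eps=1e-8):
    """Mean per-pixel Shannon entropy of softmax predictions.

    ADVENT-style sketch: minimizing this on unlabeled target images
    encourages confident (low-entropy) segmentation outputs.
    """
    p = F.softmax(logits, dim=1)                 # (B, C, H, W) class probabilities
    ent = -(p * torch.log(p + eps)).sum(dim=1)   # (B, H, W) per-pixel entropy, in nats
    return ent.mean()

# Toy target-batch logits: batch 2, 19 classes (Cityscapes), 32x32 pixels
logits = torch.randn(2, 19, 32, 32)
loss = entropy_loss(logits)
```

Confident predictions yield a lower loss than uniform ones, which is exactly the gradient signal used to adapt the segmentation network on target data.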
Paper: https://arxiv.org/abs/2004.07703
Code: https://github.com/feipan664/IntraDA
Datasets: Cityscapes, SYNTHIA, GTA5, Synscapes
Pan F, Shin I, Rameau F, et al. Unsupervised intra-domain adaptation for semantic segmentation through self-supervision[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 3764-3773.
Paper: https://arxiv.org/abs/1907.12859
Tasar O, Happy S L, Tarabalka Y, et al. ColorMapGAN: Unsupervised domain adaptation for semantic segmentation using color mapping generative adversarial networks[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(10): 7178-7193.
Paper: https://arxiv.org/abs/2108.06337
Code: https://github.com/royee182/DPL
Datasets: Cityscapes, SYNTHIA, GTA5
Cheng Y, Wei F, Bao J, et al. Dual Path Learning for Domain Adaptation of Semantic Segmentation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 9082-9091.
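The adversarial output-space alignment described in the introduction can be sketched as a two-step training loop: the segmentation network minimizes supervised loss on source images while trying to fool a discriminator on target outputs, and the discriminator learns to tell source outputs from target outputs. A minimal PyTorch sketch (toy networks, sizes, and the 0.001 adversarial weight are illustrative assumptions, not the AdaptSegNet implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Toy segmentation network: per-pixel class logits (stand-in for DeepLab)."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.net = nn.Conv2d(3, n_classes, kernel_size=3, padding=1)
    def forward(self, x):
        return self.net(x)  # (B, C, H, W) logits

class Discriminator(nn.Module):
    """Fully convolutional discriminator over softmax segmentation maps."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level source/target scores
        )
    def forward(self, p):
        return self.net(p)

seg, disc = SegNet(), Discriminator()
opt_seg = torch.optim.SGD(seg.parameters(), lr=1e-3)
opt_disc = torch.optim.SGD(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

src_img = torch.randn(2, 3, 32, 32)            # labeled source batch (e.g. GTA5)
src_lbl = torch.randint(0, 19, (2, 32, 32))    # source ground-truth masks
tgt_img = torch.randn(2, 3, 32, 32)            # unlabeled target batch (e.g. Cityscapes)

# 1) Train the segmentation net: supervised loss on source, plus an
#    adversarial term that makes target outputs look source-like.
opt_seg.zero_grad()
src_out = seg(src_img)
loss_seg = F.cross_entropy(src_out, src_lbl)
tgt_prob = F.softmax(seg(tgt_img), dim=1)
d_tgt = disc(tgt_prob)
loss_adv = bce(d_tgt, torch.ones_like(d_tgt))  # fool disc: label target as "source"
(loss_seg + 0.001 * loss_adv).backward()
opt_seg.step()

# 2) Train the discriminator to separate source (1) from target (0) output maps.
opt_disc.zero_grad()
src_prob = F.softmax(seg(src_img), dim=1).detach()
tgt_prob = F.softmax(seg(tgt_img), dim=1).detach()
d_src, d_tgt = disc(src_prob), disc(tgt_prob)
loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
loss_d.backward()
opt_disc.step()
```

The same loop applied to intermediate feature maps instead of softmax outputs gives feature-space alignment; AdaptSegNet's finding is that for segmentation the structured output space is the better place to align.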