self-training | domain adaptation | source-free (Part 2)

  • This article is reposted from the WeChat public account 机器学习炼丹术.
  • Paper: Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data
  • Author: 炼丹兄 (happy to connect and learn together)
  • Contact: WeChat cyx645016617

[Figure 1]

0 Abstract

Unsupervised domain adaptation aims to align a labeled source domain with an unlabeled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability, and data transmission efficiency. We study unsupervised model adaptation (UMA), also known as Unsupervised Domain Adaptation without Source Data, which aims to adapt a source-trained model to the target distribution without accessing the source data.

1 Related Work

1.1 Unsupervised model adaptation

The UMA task is to adapt a source model, trained on source data, to target data without ever accessing the source data.

Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning, PMLR, 2020.

Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.

These two works freeze the classifier of the source model and train the remaining network on target data with an information maximization objective, achieving domain adaptation for classification.
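To make this objective concrete, here is a minimal PyTorch sketch of an information-maximization loss in the spirit of these works (the function name and the 1e-8 stabilizer are illustrative choices; the published implementations differ in details):

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Sketch of an information-maximization objective (illustrative, not
    the papers' exact code): make each prediction confident while keeping
    the batch-level class distribution diverse."""
    probs = F.softmax(logits, dim=1)                                  # (B, C)
    # Per-sample entropy, minimized: pushes each sample toward one class.
    ent = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    # Entropy of the batch-mean prediction, maximized (hence subtracted):
    # prevents the trivial collapse onto a single class.
    mean_probs = probs.mean(dim=0)
    div = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()
    return ent - div
```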

Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9641–9650, 2020.

To address domain adaptation for classification, this work uses a conditional GAN to generate training data, where the generated samples carry source-alike semantics and target-alike classification.

A free lunch for unsupervised domain adaptive object detection without source data. arXiv preprint arXiv:2012.05400, 2020.

This work uses a self-entropy descent algorithm to improve the model's adaptation ability on the object detection task.

Uncertainty reduction for model adaptation in semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

This work achieves domain adaptation for segmentation by reducing the uncertainty of the source model's predictions on target data.
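The quantity such methods reduce is essentially the predictive entropy on target images. A minimal sketch for segmentation outputs (an illustration, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def prediction_entropy(seg_logits: torch.Tensor) -> torch.Tensor:
    """Mean per-pixel predictive entropy of a segmentation model.
    seg_logits: raw scores of shape (B, C, H, W)."""
    probs = F.softmax(seg_logits, dim=1)
    # Sum over the class channel, then average over batch and pixels.
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
```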

Domain impression: A source data free domain adaptation method. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 615–625, 2021.

Importance weighted adversarial nets for partial domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8156–8164, 2018.

These two works tackle UMA generatively, synthesizing source data to obtain a reference source distribution.

1.2 Memory-based Learning

Memory-based learning has been studied extensively.

Memory networks. arXiv preprint arXiv:1410.3916, 2014.

Early memory models such as RNNs explored how to store memory for supervised learning through the model architecture.

Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.

Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pages 1195–1204, 2017.

Semi-supervised deep learning with memory. In Proceedings of the European Conference on Computer Vision (ECCV), pages 268–283, 2018.

These three works extend the memory mechanism to semi-supervised learning, using historical models to regularize the current model and thereby obtain more stable and better results. Among them, the mean-teacher approach uses a moving-average model and has also been applied to UDA.
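The mean-teacher update is simply an exponential moving average (EMA) of the student's weights. A minimal PyTorch sketch (the momentum value is a typical choice, not taken from these papers):

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               momentum: float = 0.999) -> None:
    """Mean-teacher style update: the teacher's weights track an
    exponential moving average of the student's weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
    # Note: practical implementations usually also copy or average
    # buffers such as batch-norm running statistics.
```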

These methods, however, require labeled data during training. They do not work for the UMA task: they either collapse during training or bring little benefit.

2 Historical Contrastive Learning

This section describes the method in detail. HCL consists of two parts:

  • historical contrastive instance discrimination (HCID): learns instance-discriminative target representations that generalize well to new domains.
  • historical contrastive category discrimination (HCCD): encourages learning category-discriminative target representations.

2.1 Historical Contrastive Instance Discrimination

[Figure 2]
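The figure above illustrates HCID. Reading it as an InfoNCE-style loss, the queries come from the current model and the positive keys are the same instances encoded by a frozen historical model (e.g. the source model or an earlier checkpoint), with other instances in the batch serving as negatives. A minimal sketch under that reading (encoder names and the temperature are illustrative choices, not the paper's code):

```python
import torch
import torch.nn.functional as F

def hcid_loss(current_encoder: torch.nn.Module,
              historical_encoder: torch.nn.Module,
              x_query: torch.Tensor, x_key: torch.Tensor,
              tau: float = 0.07) -> torch.Tensor:
    """Historical contrastive instance discrimination, sketched as InfoNCE.
    x_query and x_key would typically be two augmented views of the same
    batch of target images."""
    q = F.normalize(current_encoder(x_query), dim=1)          # (B, D)
    with torch.no_grad():                                     # historical model is frozen
        k = F.normalize(historical_encoder(x_key), dim=1)     # (B, D)
    logits = q @ k.t() / tau                                  # (B, B) similarities
    labels = torch.arange(q.size(0), device=q.device)         # positives on the diagonal
    return F.cross_entropy(logits, labels)
```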

2.2 Historical Contrastive Category Discrimination

[Figure 3]
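The figure above illustrates HCCD. Roughly, target samples are pseudo-labeled by the current model, and each pseudo-label is re-weighted by how strongly the historical model agrees with it, so that only reliable pseudo-labels drive the self-training. A sketch under that assumption (the weighting scheme here is an illustration, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def hccd_loss(current_logits: torch.Tensor,
              historical_logits: torch.Tensor) -> torch.Tensor:
    """Historical contrastive category discrimination, sketched as a
    consistency-weighted self-training cross-entropy."""
    probs_his = F.softmax(historical_logits, dim=1).detach()
    pseudo = current_logits.argmax(dim=1)                     # hard pseudo-labels
    # Reliability weight: the probability the historical model assigns to
    # the current pseudo-label (high when the two models agree).
    weight = probs_his.gather(1, pseudo.unsqueeze(1)).squeeze(1)
    ce = F.cross_entropy(current_logits, pseudo, reduction="none")
    return (weight * ce).mean()
```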
