Supervised Contrastive Learning, Explained

SupCon definition:

Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes.
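Concretely, the paper formalizes this as the following loss (the $\mathcal{L}^{sup}_{out}$ variant), where $z_i$ is the normalized embedding of sample $i$, $P(i)$ is the set of positives sharing $i$'s label in the batch, $A(i)$ is the set of all other samples in the batch, and $\tau$ is a temperature:

```latex
\mathcal{L}^{sup}_{out}
  = \sum_{i \in I} \frac{-1}{|P(i)|}
    \sum_{p \in P(i)}
    \log \frac{\exp(z_i \cdot z_p / \tau)}
              {\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}
```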

Novelties:

The normalized feature representations of samples from the same class should be pulled as close together as possible, while those of different classes are pushed as far apart as possible. Moreover, each anchor sample has multiple positives (self-supervised contrastive learning typically uses only one positive per anchor) as well as multiple negatives.
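The multi-positive idea above can be sketched in PyTorch. This is a minimal, unofficial re-implementation of the batched SupCon loss, assuming embeddings `z` of shape `(N, D)` and integer `labels` of shape `(N,)`; the official code differs in details (multi-view stacking, numerical tricks):

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """Sketch of the supervised contrastive loss (L_out variant).

    Every in-batch sample with the same label as the anchor is a
    positive; all remaining samples act as negatives.
    """
    z = F.normalize(z, dim=1)            # project embeddings onto the unit sphere
    sim = z @ z.T / temperature          # pairwise cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)  # exclude self-comparisons
    # positives: same label as the anchor, excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # log-softmax over each anchor's row
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average the log-likelihood over all positives of each anchor
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```

A batch whose same-class samples are already tightly clustered should score a lower loss than a random batch, which is a quick sanity check for the implementation.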

Limitations of traditional approaches:

Drawbacks of the cross-entropy loss: poor robustness to noisy labels, and the possibility of poor margins, i.e. decision boundaries with little separation between classes, which can hurt generalization.

Traditional alternative losses have not performed well on large-scale datasets such as ImageNet.

Triplet loss: each anchor is contrasted against only one positive and one negative sample.

N-pair loss: each anchor has one positive sample and multiple negative samples.
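For comparison with SupCon's many-positives/many-negatives setup, the one-positive/one-negative case is exactly what PyTorch's built-in `TripletMarginLoss` computes, as a small sketch:

```python
import torch
import torch.nn as nn

# Triplet loss: one anchor, one positive, one negative per example.
# max(0, d(anchor, positive) - d(anchor, negative) + margin)
triplet = nn.TripletMarginLoss(margin=1.0)

anchor   = torch.randn(4, 128)
positive = anchor + 0.01 * torch.randn(4, 128)  # close to the anchor
negative = torch.randn(4, 128)                  # unrelated sample

loss = triplet(anchor, positive, negative)
```

SupCon generalizes this: instead of one hand-picked positive and negative per anchor, every same-class sample in the batch is a positive and every other sample is a negative.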

Related work

Merging the findings of our paper and CCLP is a promising direction for semi-supervised learning research.


Comparison of self-supervised learning and supervised contrastive learning

On the left is self-supervised contrastive learning, where even samples of the same class can end up far apart in the embedding space; on the right is supervised contrastive learning, where samples of the same class lie close together.
