ECCV 2022: Few-Shot Classification with Contrastive Learning

1. Motivation

A two-stage training paradigm consisting of sequential pre-training and meta-training stages has been widely used in current few-shot learning (FSL) research. However, the potential of contrastive learning in both stages of the FSL training paradigm is still not fully exploited.

2. Proposed Method

(1) Overall Approach

The paper proposes a novel contrastive learning-based framework that seamlessly integrates contrastive learning into both stages to improve the performance of few-shot classification.

In the pre-training stage, it proposes a self-supervised contrastive loss in the forms of feature vector vs. feature map and feature map vs. feature map, which uses global and local information to learn good initial representations.

In the meta-training stage, it proposes a cross-view episodic training mechanism to perform nearest centroid classification on two different views of the same episode, and adopts a distance-scaled contrastive loss based on them.

Overall framework of the model (figure not reproduced in these notes).

(2) Details

A. Pre-training

A.1 Global self-supervised contrastive loss

This part is the standard instance-level contrastive loss (InfoNCE-style) between two augmented views of each image.
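For reference, here is a minimal sketch of this standard NT-Xent/InfoNCE loss, assuming `z1` and `z2` are projected embeddings of two augmented views of the same batch; the function name and temperature value are illustrative, not from the paper:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """NT-Xent / InfoNCE over two augmented views of the same batch.

    z1, z2: (B, D) embeddings; sample i in one view has its counterpart
    in the other view as its positive, and the remaining 2B - 2 samples
    in the combined batch act as negatives.
    """
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2B, D)
    sim = z @ z.t() / temperature                            # (2B, 2B)
    # mask self-similarity so a sample cannot be its own positive
    eye = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))
    # positive of i is i + B (first half) or i - B (second half)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)
```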

A.2 Local self-supervised contrastive loss

The local loss has two parts: a map-to-map term and a vector-to-map term. The former is implemented with an attention-like mechanism between the two feature maps; the latter computes the relation between a global feature vector and every spatial position of a feature map.
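The paper's exact formulation is not reproduced in these notes, but the vector-vs-map idea can be sketched roughly as below: each global vector is contrasted against every spatial position of the feature maps, treating the positions of its own image as positives. Shapes, names, and the multi-positive treatment are assumptions:

```python
import torch
import torch.nn.functional as F

def vector_map_loss(vec, fmap, temperature=0.1):
    """Sketch of the vector-vs-map local contrast (assumed form).

    vec:  (B, D)       global vectors from one augmented view.
    fmap: (B, D, H, W) feature maps from the other view.
    Each vector treats all H*W positions of its own image's map as
    positives and positions of other images' maps as negatives.
    """
    B, D, H, W = fmap.shape
    v = F.normalize(vec, dim=1)                               # (B, D)
    m = F.normalize(fmap.flatten(2), dim=1)                   # (B, D, H*W)
    # similarity of every vector to every position of every map
    sim = torch.einsum('bd,cdp->bcp', v, m) / temperature     # (B, B, H*W)
    logits = sim.reshape(B, B * H * W)
    # positives: the H*W positions belonging to the same image
    pos = torch.zeros(B, B, H * W, dtype=torch.bool, device=vec.device)
    pos[torch.arange(B), torch.arange(B)] = True
    pos = pos.reshape(B, B * H * W)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -log_prob.masked_fill(~pos, 0.0).sum(dim=1).div(H * W).mean()
```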

A.3 Global supervised contrastive loss.

A supervised contrastive loss, i.e., class labels provide the supervision: samples sharing a label are treated as mutual positives.
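A minimal sketch of this kind of supervised contrastive (SupCon-style) loss, assuming integer class labels; details such as the temperature are illustrative:

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """SupCon-style loss: same-label samples are mutual positives.

    z: (N, D) embeddings; labels: (N,) integer class ids.
    """
    N = z.size(0)
    z = F.normalize(z, dim=1)
    eye = torch.eye(N, dtype=torch.bool, device=z.device)
    sim = (z @ z.t() / temperature).masked_fill(eye, float('-inf'))
    # positive mask: same label, excluding the sample itself
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability over each anchor's positives
    denom = pos.sum(dim=1).clamp(min=1)
    return -(log_prob.masked_fill(~pos, 0.0).sum(dim=1) / denom).mean()
```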

B. Meta-training

B.1 Cross-view Episodic Training.

Episodic training with contrastive learning added.

Two augmentations are applied to the data of an episode E, yielding E1 and E2, and features are extracted to obtain global vectors. Then a prototype is computed for each class from each view's support set. Next, an attention module aligns the prototypes of the two support sets. Finally, a contrastive loss is built between the aligned prototypes and the query-set features, so that queries from one view are classified against centroids from the other (see the sketch below).
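A simplified sketch of one cross-view step following the description above, with the attention-based prototype alignment omitted; all names and shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def cross_view_episode_loss(sup1, sup2, qry1, qry2,
                            sup_labels, qry_labels, temperature=0.1):
    """Sketch of one cross-view episodic step (alignment omitted).

    sup1, sup2: (N*K, D) support features from views E1 and E2.
    qry1, qry2: (Q, D)   query features from views E1 and E2.
    sup_labels: (N*K,), qry_labels: (Q,), class ids in [0, N).
    Queries from one view are classified by nearest centroid against
    prototypes computed from the other view.
    """
    N = int(sup_labels.max().item()) + 1
    protos1 = torch.stack([sup1[sup_labels == c].mean(0) for c in range(N)])
    protos2 = torch.stack([sup2[sup_labels == c].mean(0) for c in range(N)])
    # In the paper an attention module first aligns the two prototype
    # sets; here they are used directly for brevity.

    def nc_loss(qry, protos):
        logits = -torch.cdist(qry, protos) / temperature  # neg. distance as logit
        return F.cross_entropy(logits, qry_labels)

    # cross-view: queries of E1 vs. prototypes of E2, and vice versa
    return 0.5 * (nc_loss(qry1, protos2) + nc_loss(qry2, protos1))
```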

B.2 Distance-scaled Contrastive Loss.

Since contrastive learning approaches work solely at the instance level, it is superficial to simply add a contrastive loss into meta-training without taking full advantage of the episodic training mechanism catered for FSL.

Introducing contrastive learning into episodic training as above is an intuitive but simple approach that does not connect with the characteristics of episodic training. The authors make two improvements: first, the contrast scope is extended from the prototypes alone to the entire support set plus the prototypes; second, a distance scaling factor and additional prototypes on both views of the original episode are added, so as to reduce the similarities of queries to their positives.
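Because these notes do not give the exact formula, the following is only a loose illustration of the idea: the contrast set combines prototypes and support embeddings, and positive similarities are rescaled by a query-positive distance factor. The specific scaling `sim * dist` is an assumption, not the paper's definition:

```python
import torch
import torch.nn.functional as F

def distance_scaled_loss(qry, protos, support,
                         qry_labels, sup_labels, temperature=0.1):
    """Loose sketch only; the paper's exact scaling may differ.

    Contrast set = prototypes + all support embeddings (one view shown
    here; the paper uses both views). Positive similarities are scaled
    by the query-positive distance, shrinking the similarity of each
    query to its positives.
    """
    qry, protos, support = (F.normalize(t, dim=1) for t in (qry, protos, support))
    cand = torch.cat([protos, support], dim=0)                 # (N + N*K, D)
    cand_labels = torch.cat(
        [torch.arange(protos.size(0), device=sup_labels.device), sup_labels])
    sim = qry @ cand.t()                                       # cosine similarity
    dist = torch.cdist(qry, cand)                              # in [0, 2] here
    pos = qry_labels.unsqueeze(1) == cand_labels.unsqueeze(0)  # positive mask
    sim = torch.where(pos, sim * dist, sim) / temperature      # assumed scaling
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    denom = pos.sum(dim=1).clamp(min=1)
    return -(log_prob.masked_fill(~pos, 0.0).sum(dim=1) / denom).mean()
```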

3. Experiments

The ablation study is the main result worth examining (the table itself is not reproduced in these notes).
