[Paper Reading] SimCLR: Google's New Self-Supervised Method (Feb 2020)

Title: A Simple Framework for Contrastive Learning of Visual Representations

Authors: Ting Chen, Geoffrey Hinton, ... (Google Research)

Reference: "A masterpiece from Hinton's group: best unsupervised-learning performance on ImageNet improved by 7% in one step, rivaling supervised learning"

 

Network Architecture

[Figures: SimCLR framework overview]
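The framework itself is simple: two augmented views of each image pass through a shared encoder f(·) (a ResNet), then a small projection head g(·), and a contrastive loss pulls the two projections together. Below is a minimal PyTorch sketch of this forward pass, assuming a ResNet-50 encoder and a 128-dimensional projection; the class and attribute names are illustrative and not taken from any official release.

```python
# Minimal sketch of the SimCLR forward pass (PyTorch).
# Assumptions: ResNet-50 encoder, 2-layer MLP projection head, 128-d output.
import torch
import torch.nn as nn
import torchvision

class SimCLRModel(nn.Module):
    def __init__(self, proj_dim=128):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        feat_dim = resnet.fc.in_features        # 2048 for ResNet-50
        resnet.fc = nn.Identity()               # drop the classification head
        self.encoder = resnet                   # f(.): image -> representation h
        self.projection_head = nn.Sequential(   # g(.): h -> z used only by the loss
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        h = self.encoder(x)              # representation kept for downstream tasks
        z = self.projection_head(h)      # projection fed to the contrastive loss
        return h, z
```

After pre-training, the projection head g(·) is discarded and the representation h is what gets evaluated (e.g., with a linear classifier).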

 

Data Augmentation

1. Composition of data augmentation operations is crucial for learning good representations

[Figure: effect of composing augmentation operations]

2. No single transformation suffices to learn good representations

3. It is critical to compose random cropping with color distortion (see the augmentation sketch after this list)

4. Data augmentation that does not yield accuracy benefits for supervised learning can still help considerably with contrastive learning

[Figure: data augmentation comparison]
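To make points 2-4 concrete, here is a sketch of the kind of augmentation pipeline the paper studies: random resized cropping composed with color distortion, plus random grayscale and Gaussian blur. It uses torchvision transforms; the strength parameters follow commonly cited SimCLR settings and should be read as assumptions rather than the exact official configuration.

```python
# Sketch of a SimCLR-style augmentation pipeline (torchvision).
# s controls color-distortion strength; values here are assumed defaults.
import torchvision.transforms as T

def simclr_augment(image_size=224, s=1.0):
    color_jitter = T.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)
    return T.Compose([
        T.RandomResizedCrop(image_size),
        T.RandomHorizontalFlip(),
        T.RandomApply([color_jitter], p=0.8),  # color distortion: key partner of cropping
        T.RandomGrayscale(p=0.2),
        T.GaussianBlur(kernel_size=image_size // 10 * 2 + 1),  # odd kernel size
        T.ToTensor(),
    ])

# Two independent draws of this stochastic pipeline applied to the same image
# give the two "views" x_i, x_j that form a positive pair for the contrastive loss.
```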

 

Architectures for Encoder and Head

1. Unsupervised contrastive learning benefits (more) from bigger models

[Figure: effect of model size]

2. g(·): A nonlinear projection head is better than a linear projection

[Figure: linear vs. nonlinear projection head]

3. Contrastive learning benefits (more) from larger batch sizes and longer training, since every other example in the batch acts as a negative (see the loss sketch after this list)

[Figure: effect of batch size and training epochs]
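The batch-size result follows directly from the loss: in NT-Xent, the other 2(N−1) augmented examples in a batch of N images (two views each) serve as negatives, so larger batches supply more negatives per positive pair. Below is a minimal PyTorch sketch of the loss, written from the paper's definition rather than copied from any implementation; the function name and temperature value are illustrative.

```python
# Minimal NT-Xent (normalized temperature-scaled cross entropy) sketch.
# z1, z2: projections g(f(x)) of the two augmented views of the same N images.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N x d, unit-norm rows
    sim = z @ z.t() / temperature                       # cosine similarities / tau
    sim.fill_diagonal_(float('-inf'))                   # a sample is never its own negative
    # the positive of row i is row i+n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In training, z1 and z2 come from two independent augmentation draws of the same batch passed through the model sketched in the Network Architecture section.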

 

Results

[Figures: main results on ImageNet, comparison with prior self-supervised and supervised methods]
