Paper reading: "Discriminative-Generative Representation Learning for One-Class Anomaly Detection", CVPR 2021
Generative adversarial networks (GANs), as a form of generative self-supervised learning, have been widely studied for anomaly detection. However, because the generator focuses too heavily on pixel-level details, its representation learning ability is limited, and it cannot learn abstract semantic representations from label-prediction pretext tasks as effectively as the discriminator can.
(a) Generative methods learn representations r by reconstructing the input x;
(b) discriminative methods learn representations r by predicting a label c.
To improve the generator's representation learning ability, the authors propose a self-supervised learning framework that combines generative and discriminative methods.
We reuse the discriminator as a predictor to guide the generator to generate samples x that match the pretext-task labels ct, so that the generator can learn the representation r from pretext tasks designed for discriminative methods.
The generator no longer learns representations from reconstruction error alone; instead it learns under the guidance of the discriminator, and can therefore benefit from pretext tasks designed for discriminative methods.
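As a rough illustration, the combined objective could look like the following NumPy sketch. The function name `generator_loss`, the weighting factor `lam`, and the exact form of the terms are all hypothetical; the paper's actual losses may differ.

```python
import numpy as np

def generator_loss(x_restored, x_orig, angle_logits, angle_label, lam=1.0):
    """Hypothetical sketch of the generator objective: a pixel-level
    reconstruction term plus a pretext term, in which the discriminator
    (reused as a predictor) must recognize the rotation label from the
    generator's output.
    angle_logits: the discriminator's logits over the 4 rotation classes."""
    recon = np.mean((x_restored - x_orig) ** 2)
    # Softmax cross-entropy on the discriminator's rotation prediction.
    logits = angle_logits - angle_logits.max()          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    pretext = -log_probs[angle_label]
    return recon + lam * pretext
```

The pretext term is small only when the discriminator can correctly read the rotation label off the generated image, which is what pushes the generator toward semantic rather than purely pixel-level representations.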
1. First, the training samples are rotated randomly, and the rotation angles are recorded as pretext-task labels.
2. Second, an encoder and a decoder learn the latent-space encoding of the samples by reconstructing the rotated image xr, while a discriminator predicts the rotation angle of the image.
3. Finally, the rotated image must be restored (de-rotated) by the encoder-decoder, and the discriminator checks both the authenticity and the rotation angle of the restored image.
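Step 1 above (rotation-based pretext labeling) might be sketched as follows; this is a hypothetical NumPy helper, not the authors' code, and `make_rotation_batch` is an assumed name.

```python
import numpy as np

def make_rotation_batch(images):
    """Rotate each image by a random multiple of 90 degrees and record
    the rotation index k (0..3) as the pretext-task label.
    images: array of shape (N, H, W, C)."""
    labels = np.random.randint(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels
```

Restricting rotations to multiples of 90 degrees keeps the transform lossless (no interpolation), which is the standard choice for rotation-prediction pretext tasks.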
The method increases the performance of the top-performing GAN-based baseline by 6% on CIFAR-10, and by 2% on MVTec AD.
However, the method is not suitable for detecting texture anomalies (e.g., carpet and grid).
The paper proposes this discriminative-generative representation learning method for the one-class anomaly detection task, named DGAD (Discriminative-Generative Anomaly Detection).
1. DGAD's generator can learn abstract semantic representations more effectively, without the label semantic ambiguity problem. To the authors' knowledge, this is the first attempt to let a generator benefit from pretext tasks designed for discriminative methods.
2. The method significantly outperforms several state-of-the-art techniques on multiple benchmark datasets, improving the top-performing GAN-based baseline by 6% on CIFAR-10 and by 2% on MVTec AD.
3. Experiments show that, given the same pretext prediction task, the generative method can approach the performance of the discriminative method using only reconstruction error as the anomaly score, and has a large speed advantage.
4. Ablation studies show that absolute position information degrades the representation learning ability of generative methods on geometric-transformation pretext tasks, and affects discriminative methods differently across different geometric transformations; the use of position information should therefore be chosen carefully for each pretext task.
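Contribution 3 scores anomalies by reconstruction error alone. A minimal sketch of such a score (a hypothetical helper; it assumes the model's reconstruction `x_hat` has already been computed):

```python
import numpy as np

def anomaly_score(x, x_hat):
    """Per-sample mean squared reconstruction error as the anomaly score.
    x, x_hat: arrays of shape (N, H, W, C); higher score = more anomalous."""
    return ((x - x_hat) ** 2).reshape(len(x), -1).mean(axis=1)
```

Because this only requires a forward pass through the encoder-decoder (no test-time ensembling over transformations), it explains the speed advantage over discriminative scoring schemes that must evaluate every transformed copy of each test image.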