Paper Notes: Adversarial Nets and IQA

Paper 1 (CVPR 2018): Hallucinated-IQA: No-Reference Image Quality Assessment via Adversarial Learning

@inproceedings{Lin2018Hallucinated,
  title={Hallucinated-IQA: No-Reference Image Quality Assessment via Adversarial Learning},
  author={Lin, Kwan-Yee and Wang, Guanxiang},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
}

Paper link: http://cn.arxiv.org/pdf/1804.01681

In this work, the authors propose a hallucination-guided quality regression network to tackle no-reference image quality assessment (NR-IQA). A hallucinated reference image is first generated from the distorted image to compensate for the missing ground-truth reference (see Figure 1 for examples). The hallucinated reference is then paired with the distorted image and fed to the regressor, which learns their perceptual discrepancy under the guidance of the implicit ranking relationship inside the generator and finally predicts the quality score.


Figure 1: An illustration of our motivation. The first column is the ground-truth reference image, which is undistorted. The second column shows several kinds of distortion that commonly occur. The third column shows the hallucinated reference images generated by our approach. The fourth column is the discrepancy map, which captures rich information that can be used to guide the learning of the quality regression network toward high-accuracy results.

This CVPR 2018 paper targets the NR-IQA task; the overall framework is shown below:


Figure 2: An illustration of our proposed Hallucinated-IQA framework. It consists of three strongly related subnets. (a) The Quality-Aware Generative Network is used to generate hallucinated reference images; to obtain high-resolution hallucinated images, a quality-aware loss is introduced into the learning process. (b) The Hallucination-Guided Quality Regression Network incorporates the discrepancy information between the hallucinated image and the distorted image, encoded in the discrepancy map. This discrepancy information, together with high-level semantic fusion from the generative network, supplies the regression network with rich information and greatly guides its learning. (c) Since the result of the hallucinated image is crucial for the final prediction, an IQA-Discriminator is proposed to further refine the hallucinated image.
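To make the data flow concrete, below is a minimal PyTorch-style sketch of how the three subnets could be wired together at inference time. The module names, the use of a plain image difference as the discrepancy map, and the channel-wise concatenation are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HallucinatedIQA(nn.Module):
    """Illustrative wiring of the Hallucinated-IQA pipeline (not the paper's exact layers)."""

    def __init__(self, generator: nn.Module, regressor: nn.Module):
        super().__init__()
        self.generator = generator  # quality-aware generative network G
        self.regressor = regressor  # hallucination-guided quality regression network R

    def forward(self, distorted: torch.Tensor) -> torch.Tensor:
        # G hallucinates a pseudo-reference image from the distorted input.
        hallucinated = self.generator(distorted)
        # Discrepancy map: here simply the difference between the hallucinated
        # reference and the distorted image (an assumption for illustration).
        discrepancy = hallucinated - distorted
        # R predicts a scalar quality score from the distorted image plus the
        # discrepancy map (assumed to accept the concatenated 6-channel tensor).
        return self.regressor(torch.cat([distorted, discrepancy], dim=1))
```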

The paper defines a dedicated loss function L_D for the IQA-Discriminator D.
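A minimal sketch, assuming the standard adversarial discriminator objective over real reference images I_r and hallucinated references G(I_d) (the paper's exact L_D may include additional IQA-specific terms):

$$
\mathcal{L}_D = -\,\mathbb{E}_{I_r}\!\left[\log D(I_r)\right] - \mathbb{E}_{I_d}\!\left[\log\left(1 - D\!\left(G(I_d)\right)\right)\right]
$$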

The adversarial loss function of the quality-aware generative network G is defined with respect to this discriminator.
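A sketch of the corresponding generator adversarial term under the same assumptions (non-saturating form shown; the paper may use a different variant):

$$
\mathcal{L}_{adv} = -\,\mathbb{E}_{I_d}\!\left[\log D\!\left(G(I_d)\right)\right]
$$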

The overall loss function of G combines this adversarial term with reconstruction-oriented terms.
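A hedged sketch of such a composition, writing L_s for the semantic-level term that relies on a fixed pre-trained copy of R (consistent with the training note below); the exact terms and weights are the paper's:

$$
\mathcal{L}_G = \mathcal{L}_{pix} + \alpha\,\mathcal{L}_{s} + \beta\,\mathcal{L}_{adv}
$$

Here L_pix is a pixel-wise reconstruction loss against the true reference, L_s compares high-level features of the hallucinated and reference images, and α, β are weighting hyperparameters.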

The loss function of the hallucination-guided quality regression network R is a regression objective on the ground-truth quality score.
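A minimal sketch, assuming an L1 regression loss between the predicted and ground-truth score s, with the discrepancy map taken as I_h − I_d (the paper's exact form may differ):

$$
\mathcal{L}_R = \left\| R\!\left(I_d,\; I_h - I_d\right) - s \right\|_1, \qquad I_h = G(I_d)
$$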

The training procedure is as follows:

(Training algorithm from the paper, presented as numbered steps; steps 4, 8 and 9 are referenced in the note below.)

Note: during training, the R in steps 4 and 8 is the same network, optimized in a way that mutually reinforces it with G, whereas the pre-trained regression network R used in the term L_s of step 9 is an independent one whose weights are fixed and which has no feature fusion with G.
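A highly simplified sketch of such an alternating training scheme, in PyTorch style. The optimizers, the weighting factors, the features() hook on the frozen regressor, and the concrete loss terms are hypothetical placeholders; the paper's actual algorithm has more steps:

```python
import torch

def train_step(G, D, R, R_fixed, distorted, reference, mos,
               opt_G, opt_D, opt_R, alpha=1.0, beta=1e-3):
    """One illustrative alternating update of D, G and R (not the paper's exact algorithm).
    R_fixed is a pre-trained regressor with frozen weights (see the note above)."""
    bce, l1 = torch.nn.BCELoss(), torch.nn.L1Loss()

    # --- Update the IQA-Discriminator D (assumes D outputs probabilities in (0, 1)). ---
    with torch.no_grad():
        fake = G(distorted)
    d_real, d_fake = D(reference), D(fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # --- Update G: pixel loss + semantic term L_s from the frozen R + adversarial term. ---
    hallucinated = G(distorted)
    l_pix = l1(hallucinated, reference)
    l_s = l1(R_fixed.features(hallucinated), R_fixed.features(reference))  # hypothetical feature hook
    d_out = D(hallucinated)
    l_adv = bce(d_out, torch.ones_like(d_out))
    g_loss = l_pix + alpha * l_s + beta * l_adv
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

    # --- Update R on the distorted image paired with the discrepancy map. ---
    with torch.no_grad():
        hallucinated = G(distorted)
    pred = R(torch.cat([distorted, hallucinated - distorted], dim=1))
    r_loss = l1(pred, mos)
    opt_R.zero_grad()
    r_loss.backward()
    opt_R.step()

    return d_loss.item(), g_loss.item(), r_loss.item()
```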

Generative adversarial networks aim to make a generator produce data resembling a given data distribution by reducing the distance between two probability distributions.
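For reference, this idea is formalized by the original GAN minimax objective of Goodfellow et al.:

$$
\min_G \max_D \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
$$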
 

Paper 2 (AAAI 2018): RAN4IQA: Restorative Adversarial Nets for No-Reference Image Quality Assessment

Paper link: https://arxiv.org/pdf/1712.05444.pdf

The idea of this paper is similar to the CVPR 2018 framework above: the distorted image is first restored, and then the restored image and the distorted image are fed together as input to produce a quality score for the distorted image (the method is patch-based). The figure below shows the framework (taken from the original paper).

(RAN4IQA framework figure from the original paper.)

The model consists of a restoration module, a discriminator module, and a quality evaluation module. The restorer restores distorted image patches as faithfully as the network allows, the discriminator distinguishes restored patches from reference patches, and the evaluator learns the image quality score by assessing the discrepancy between the restored and distorted patches.
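A minimal sketch of how such a patch-based pipeline could produce an image-level score. The function name, the patch size, the channel-wise concatenation and the mean pooling over patch scores are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def predict_quality(restorer: nn.Module, evaluator: nn.Module,
                    distorted: torch.Tensor, patch_size: int = 32) -> torch.Tensor:
    """Patch-based NR-IQA scoring sketch: restore each patch, let the evaluator
    score each (distorted, restored) pair, then average the patch scores."""
    # Split the (N, C, H, W) image into non-overlapping patches.
    n, c = distorted.shape[:2]
    patches = F.unfold(distorted, kernel_size=patch_size, stride=patch_size)  # (N, C*p*p, L)
    num_patches = patches.shape[-1]
    patches = patches.transpose(1, 2).reshape(-1, c, patch_size, patch_size)

    with torch.no_grad():
        restored = restorer(patches)                                # restoration module
        scores = evaluator(torch.cat([patches, restored], dim=1))   # per-patch quality scores

    # Image-level score: mean over patches (a simple pooling assumption).
    return scores.view(n, num_patches, -1).mean(dim=1)
```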
 
