[Paper] DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks
[Year] ICCV 2017
[Author] Andrey Ignatov, Nikolay Kobyshev, Kenneth Vanhoey, Radu Timofte, Luc Van Gool
[Pages] http://people.ee.ethz.ch/~ihnatova/index.html
[Description]
1) Formulates enhancement as learning a transformation function from low-quality images to high-quality ones.
2) The transformation network is a CNN with residual blocks, trained with four losses (color, texture, content, total variation). The color loss is the MSE between Gaussian-blurred versions of the two images, the texture loss is an adversarial loss, the content loss is a perceptual loss, and the total variation loss is the norm of the image gradients.
3) Introduces DPED, a dataset for image quality enhancement containing image pairs captured with three phones (iPhone, BlackBerry, Sony) and a Canon DSLR.
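The color loss described above can be sketched in NumPy; the function names, kernel size, and sigma below are illustrative choices of mine, not the paper's values:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    # 1-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, size=5, sigma=1.5):
    # Separable Gaussian blur applied over H and W of an (H, W, C) image
    k = gaussian_kernel(size, sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, out)
    return out

def color_loss(pred, target):
    # MSE between Gaussian-blurred images: compares overall color
    # while discounting texture/high-frequency differences
    return np.mean((blur(pred) - blur(target)) ** 2)
```

Blurring before the MSE is what separates this from a plain pixel loss: fine texture mismatches are smoothed away, so only color differences are penalized.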
[Paper] WESPE: Weakly Supervised Photo Enhancer for Digital Cameras
[Year] arXiv 1709
[Author] Andrey Ignatov, Nikolay Kobyshev, Radu Timofte, Kenneth Vanhoey, Luc Van Gool
[Pages] http://people.ee.ethz.ch/~ihnatova/index.html
[Description]
1) Weakly supervised: training requires no paired low-/high-quality images. Two adversarial losses (color and texture) push the enhanced image into the domain of high-quality images.
2) A content loss enforces content consistency between the enhanced image and the input. Note that the enhanced image is first backward-mapped into the input space, where the perceptual loss is then defined.
3) A total variation (TV) loss encourages a smooth output.
4) The overall approach and loss design follow DPED.
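A minimal sketch of a total variation loss of the kind used here (the absolute-difference form below is one common variant; the paper's exact formulation may differ):

```python
import numpy as np

def tv_loss(img):
    # Total variation: sum of absolute differences between neighboring
    # pixels along height and width, penalizing non-smooth outputs
    dh = np.abs(img[1:, :] - img[:-1, :])
    dw = np.abs(img[:, 1:] - img[:, :-1])
    return dh.sum() + dw.sum()
```

A constant image has zero TV, while noisy or high-frequency images score high, which is why this term suppresses artifacts in the generator output.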
[Paper] Aesthetic-Driven Image Enhancement by Adversarial Learning
[Year] arXiv 1707
[Author] Yubin Deng, Chen Change Loy, Xiaoou Tang
[Pages]
[Description]
1) A weakly supervised method that learns cropping and color-transformation parameters to enhance aesthetic quality.
[Paper] Scribbler: Controlling Deep Image Synthesis with Sketch and Color
[Year] CVPR 2017
[Author] Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays
[Pages] http://scribbler.eye.gatech.edu/
[Description]
1) Proposes a CNN- and GAN-based image synthesis scheme that generates color images from sketches and color strokes; applicable to tasks such as interactive image generation and visual search.
2) The key contribution is the loss design. The total loss has four parts: a per-pixel L2 loss, an L2 feature loss, an adversarial loss (makes generated images more realistic and more varied), and a total variation loss (encourages smoothness).
3) Training proceeds in two stages: sketch-based photo synthesis and user-guided colorization.
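The four-term total loss can be sketched as a weighted sum; the weights and function names below are placeholders of mine, not the paper's values:

```python
import numpy as np

def feature_l2_loss(feat_gen, feat_real):
    # L2 distance between feature maps from a fixed pretrained network
    # (plain arrays here stand in for the real activations)
    return np.mean((feat_gen - feat_real) ** 2)

def scribbler_total_loss(l_pixel, l_feat, l_adv, l_tv,
                         w_pixel=1.0, w_feat=1.0, w_adv=1e-2, w_tv=1e-5):
    # Weighted sum of the four Scribbler loss terms; the weights are
    # illustrative placeholders, not the published hyperparameters
    return w_pixel * l_pixel + w_feat * l_feat + w_adv * l_adv + w_tv * l_tv
```

In practice the adversarial and TV terms get much smaller weights than the pixel and feature terms, since they regularize rather than drive the reconstruction.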
[Paper] TextureGAN: Controlling Deep Image Synthesis with Texture Patches
[Year] arXiv 1706
[Author] Wenqi Xian, Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays
[Pages]
[Description]
1) Proposes a texture synthesis pipeline: given a sketch, the user drags a texture patch onto a target object and the system automatically fills the object region with that texture.
2) Training has two stages. The first is ground-truth pre-training, using a feature loss (presumably a perceptual loss), an adversarial loss, a style loss (based on Gram matrices), a pixel loss, and a color loss. The second is external texture fine-tuning: patches are randomly cropped from the generated texture region and compared against external texture samples (?) to compute a loss, producing more realistic textures.
3) Submitted to CVPR 2018; code not yet released. Some points in the paper remain unclear (e.g. the definitions of the pixel and color losses, and details of the fine-tuning stage) and are worth following up on.
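A Gram-matrix style loss of the kind used in the pre-training stage can be sketched as follows (the shapes and normalization constant are my own choices):

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map -> (C, C) Gram matrix of channel-wise
    # correlations, normalized by the number of entries
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_gen, feat_ref):
    # MSE between Gram matrices: matches texture statistics
    # independently of where the texture appears spatially
    return np.mean((gram_matrix(feat_gen) - gram_matrix(feat_ref)) ** 2)
```

Because the Gram matrix sums over all spatial positions, it is invariant to spatial rearrangement, which is exactly why it captures texture rather than layout.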
[Paper] Globally and Locally Consistent Image Completion
[Year] SIGGRAPH 2017
[Author] Satoshi Iizuka, Edgar Simo-Serra, Hiroshi Ishikawa
[Pages] http://hi.cs.waseda.ac.jp/~iizuka/projects/completion/en/
[Description]
1) Proposes a GAN-based image completion method. The completion network is a CNN that fills holes in the image, trained with an MSE loss; the context discriminator has a global branch and a local branch, which are structurally similar CNNs whose extracted features are concatenated and mapped to a single probability used to compute the GAN loss.
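The global/local fusion at the end of the discriminator can be sketched as: concatenate the two branches' feature vectors, then apply a single fully connected layer and a sigmoid (the weights and shapes here are hypothetical):

```python
import numpy as np

def discriminator_head(global_feat, local_feat, w, b):
    # Fuse global-context and local-patch features by concatenation,
    # then map to a single real/fake probability with one FC layer
    fused = np.concatenate([global_feat, local_feat])
    logit = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logit))
```

Keeping both branches lets the discriminator demand that the filled hole looks plausible both on its own (local) and in the context of the whole image (global).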