Paper reading: Image-to-Image Translation with Conditional Adversarial Networks (pix2pix)

1 Introduction

1.1 Background

Traditionally, each of these tasks has been tackled with separate, special-purpose machinery (e.g., [14, 23, 18, 8, 10, 50, 30, 36, 16, 55, 58]), despite the fact that the setting is always the same: predict pixels from pixels. Our goal in this paper is to develop a common framework for all these problems.

1.2 Problem addressed

Coming up with loss functions that force the CNN to do what we really want – e.g., output sharp, realistic images – is an open problem and generally requires expert knowledge.
It would be highly desirable if we could instead specify only a high-level goal, like “make the output indistinguishable from reality”, and then automatically learn a loss function appropriate for satisfying this goal. Fortunately, this is exactly what is done by the recently proposed Generative Adversarial Networks (GANs)
With a traditional CNN, a suitable loss function must be designed by hand, which requires domain knowledge and is hard to get right. A GAN not only produces outputs that are indistinguishable from real images, it also learns the loss function automatically.

2 Method

2.1 GAN vs. cGAN (modifying the loss function)

GAN: an unconditional GAN learns the edge → y mapping, but its discriminator only sees the image being judged — the loss is computed from y and G(x) alone, without the input image.
cGAN: the discriminator additionally observes the input edge map, so the loss is computed over the (input, output) pair — D scores (x, y) against (x, G(x, z)).
In the conditional case the discriminator sees D(x, y) and D(x, G(x, z)), giving the cGAN objective:

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]$$

In the unconditional case it sees only D(y) and D(G(x, z)):

$$\mathcal{L}_{GAN}(G, D) = \mathbb{E}_{y}[\log D(y)] + \mathbb{E}_{x,z}[\log(1 - D(G(x, z)))]$$
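The difference between the two formulations can be sketched numerically. This is a minimal sketch, not the paper's implementation: the toy discriminators `D_uncond` and `D_cond` and the array shapes are made-up placeholders for illustration.

```python
import numpy as np

def d_loss_unconditional(D, y_real, y_fake):
    """Discriminator loss for a plain GAN: D sees only the image,
    and maximizes log D(y) + log(1 - D(G(x, z)))."""
    return -(np.log(D(y_real)) + np.log(1.0 - D(y_fake)))

def d_loss_conditional(D, x, y_real, y_fake):
    """Discriminator loss for a cGAN: D also sees the input x,
    and maximizes log D(x, y) + log(1 - D(x, G(x, z)))."""
    return -(np.log(D(x, y_real)) + np.log(1.0 - D(x, y_fake)))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy stand-ins for real discriminator networks (hypothetical):
D_uncond = lambda y: sigmoid(y.mean())           # judges the image alone
D_cond = lambda x, y: sigmoid((x * y).mean())    # judges the (input, output) pairing

x = np.ones((4, 4))        # input edge map
y_real = np.ones((4, 4))   # ground-truth photo
y_fake = np.zeros((4, 4))  # generator output G(x, z)

loss_u = d_loss_unconditional(D_uncond, y_real, y_fake)
loss_c = d_loss_conditional(D_cond, x, y_real, y_fake)
```

The conditional loss is the one pix2pix uses: because `D_cond` scores the pairing, the generator cannot fool it by producing a realistic image that ignores the input.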

2.2 Loss function comparison

The paper mixes the adversarial objective with an L1 reconstruction term, which penalizes distance to the ground truth and encourages less blurring than L2:

$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}[\, \|y - G(x, z)\|_1 \,]$$

The final objective combines the two:

$$G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L1}(G)$$
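The combined generator objective can be sketched the same way. A minimal numpy sketch: the helper names and toy values are made up for illustration, though λ = 100 is the value the paper uses in its experiments.

```python
import numpy as np

def l1_loss(y, g_out):
    """L_L1(G) = E[ ||y - G(x, z)||_1 ] -- the reconstruction term."""
    return np.abs(y - g_out).mean()

def generator_loss(d_fake_score, y, g_out, lam=100.0):
    """Generator side of G* = arg min_G max_D L_cGAN + lambda * L_L1.
    d_fake_score is D(x, G(x, z)); G wants to push it toward 1."""
    adv = -np.log(d_fake_score)  # non-saturating adversarial term
    return adv + lam * l1_loss(y, g_out)

y = np.full((4, 4), 0.8)      # ground truth
g_out = np.full((4, 4), 0.6)  # generator output
loss = generator_loss(d_fake_score=0.4, y=y, g_out=g_out)
```

With λ this large, the L1 term dominates early training and keeps outputs close to the ground truth, while the adversarial term pushes them toward the sharp, realistic end.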
