Style Transfer

In Gatys's paper, VGG serves only as the loss network and its weights are never updated; what gets updated is a tensor (the target) initialized as a copy of the content image. The image itself is optimized step by step until the transfer is complete.
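The "optimize the image, freeze the network" idea above can be sketched in a toy setting. This is a minimal NumPy sketch, not the real method: a fixed random linear map stands in for the frozen VGG features (an assumption for illustration), content is matched on raw features, and style on Gram matrices, with hand-written gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen VGG layer: a fixed random linear map (assumption:
# the real method uses conv activations of a pretrained VGG network).
C, M, N = 4, 16, 64                      # feature channels, spatial size, pixels
W = rng.normal(size=(C * M, N)) / np.sqrt(N)

def feats(x):
    return (W @ x).reshape(C, M)

def gram(F):
    return F @ F.T / M                   # style is captured by feature correlations

content = rng.normal(size=N)             # flattened "content image"
style = rng.normal(size=N)               # flattened "style image"
Fc = feats(content)                      # fixed content target
Gs = gram(feats(style))                  # fixed style target
alpha, beta, lr = 1.0, 1e-2, 0.05

def total_loss(x):
    F = feats(x)
    return alpha * 0.5 * np.sum((F - Fc) ** 2) + beta * np.sum((gram(F) - Gs) ** 2)

x = content.copy()                       # start from a copy of the content image
start_loss = total_loss(x)
for _ in range(500):
    F = feats(x)
    gF = alpha * (F - Fc)                             # content-loss gradient
    gF += beta * (4.0 / M) * (gram(F) - Gs) @ F       # style (Gram) loss gradient
    x -= lr * W.T @ gF.reshape(-1)       # only the image is updated; W never changes
end_loss = total_loss(x)
```

Note the loop body: the gradient is backpropagated all the way to `x`, and the "network" `W` never appears on the left of an update. That is the whole trick of the Gatys formulation.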

Fei-Fei Li's group did the same thing with a generative network, which amounts to training a dedicated transfer network for one style: what gets updated is the generator G.
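The contrast with the previous approach can be shown with the same toy perceptual loss: instead of optimizing one image, we train a generator once, then stylize any new image in a single forward pass. This is a hedged sketch under strong assumptions: the generator here is a single linear layer `A` (the real method uses a deep convolutional network), and the loss network is again a fixed random map.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the frozen perceptual-loss network (assumption: a toy
# random linear map instead of pretrained VGG features).
C, M, N = 4, 16, 64
W = rng.normal(size=(C * M, N)) / np.sqrt(N)

def feats(x):
    return (W @ x).reshape(C, M)

def gram(F):
    return F @ F.T / M

style = rng.normal(size=N)
Gs = gram(feats(style))                  # one fixed style target for the network
alpha, beta, lr = 1.0, 1e-2, 1e-3

def total_loss(y, Fc):
    F = feats(y)
    return alpha * 0.5 * np.sum((F - Fc) ** 2) + beta * np.sum((gram(F) - Gs) ** 2)

# The "transfer network": a single linear layer y = A @ x, starting at identity.
A = np.eye(N)
contents = rng.normal(size=(8, N))       # a tiny training set of content images

start = np.mean([total_loss(A @ x, feats(x)) for x in contents])
for _ in range(300):
    for x in contents:
        y = A @ x
        F, Fc = feats(y), feats(x)
        gF = alpha * (F - Fc) + beta * (4.0 / M) * (gram(F) - Gs) @ F
        gy = W.T @ gF.reshape(-1)
        A -= lr * np.outer(gy, x)        # update the generator, not the image
end = np.mean([total_loss(A @ x, feats(x)) for x in contents])

# At test time, stylizing an unseen image is one forward pass, no optimization:
stylized = A @ rng.normal(size=N)
```

The training objective is the same perceptual loss as before; only the variable being optimized has changed, which is why one trained G handles any content image for its one style.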

Both of these seem different from pix2pix: apparently each of them needs only a single image per style. CartoonGAN seems to carry this idea forward too, which feels like a bit of a shortcut, sigh.

 

From Gatys's paper:

Content Reconstructions. We can visualise the information at different processing stages in the CNN by reconstructing the input image from only knowing the network's responses in a particular layer. We reconstruct the input image from layers 'conv1_2' (a), 'conv2_2' (b), 'conv3_2' (c), 'conv4_2' (d) and 'conv5_2' (e) of the original VGG-Network. We find that reconstruction from lower layers is almost perfect (a–c). In higher layers of the network, detailed pixel information is lost while the high-level content of the image is preserved (d,e).
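The reconstruction experiment in this excerpt can be mimicked in a toy setting. A hedged sketch, assuming random linear "layers" in place of VGG conv layers: when a layer keeps as many features as there are pixels (like lower layers), the image is mostly recoverable from its responses; when it keeps far fewer (like higher layers), pixel detail is irretrievably lost.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
image = rng.normal(size=N)               # flattened "input image"

def reconstruct(W, steps=2000, lr=0.1):
    # Gradient descent on ||W x - W image||^2: recover the image from a
    # frozen layer's responses alone, starting from a blank image.
    target = W @ image
    x = np.zeros(N)
    for _ in range(steps):
        x -= lr * W.T @ (W @ x - target)
    return x

# "Lower layer": as many features as pixels -> most of the image comes back.
W_low = rng.normal(size=(N, N)) / np.sqrt(N)
err_low = np.linalg.norm(reconstruct(W_low) - image)

# "Higher layer": only 8 features for 64 pixels -> a 56-dimensional null
# space of pixel detail is simply gone, as the excerpt describes for (d,e).
W_high = rng.normal(size=(8, N)) / np.sqrt(N)
err_high = np.linalg.norm(reconstruct(W_high) - image)
```

The low-dimensional layer's null space is exactly the "detailed pixel information" the paper says is lost, while whatever the features do encode (the high-level content, in the real network) is still matched.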
