Learning the Styles of World-Famous Paintings with Python

First of all: this article is based on a post by an expert on Zhihu, with some modifications and rework of my own. (YHZ)

Here are my own style-transfer results:

[Image 1]

[Image 2]

[Image 3]

The first image is my university, the second is Van Gogh's The Starry Night, and the third is the transformed result.
Doesn't it look much better, with a real artistic feel?
Next, I'll walk through how to implement it.

First, install the required modules; pip takes care of it in one step:

pip install keras 

pip install h5py

pip install tensorflow

The TensorFlow download can be quite slow this way; you can also build it from source if you prefer. Your call. (I'm not sure yet whether TensorFlow supports Python 3.7.) One more note: the script below uses scipy.misc.imsave, which was removed in newer SciPy releases (1.2+), so you may need an older SciPy, or you can swap in imageio.imwrite.
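As a quick sanity check (a minimal sketch; the version numbers on your machine will differ), you can confirm everything imports correctly:

# Quick import/version check for the packages installed above.
import keras
import tensorflow as tf
import h5py
import scipy

print('Keras:', keras.__version__)
print('TensorFlow:', tf.__version__)
print('h5py:', h5py.__version__)
print('SciPy:', scipy.__version__)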

Then download the VGG16 model weights from Baidu Cloud:
Link: https://pan.baidu.com/s/1pLUPOkN  Password: 4nud

A side note while we're at it: Windows Explorer won't let you create a folder whose name starts with a dot, but the command line will. Run md followed by the directory name (for example, md .keras).
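Why the dot-folder tip matters here: Keras caches pretrained weights under ~/.keras/models, so the downloaded .h5 file should be placed there to avoid a fresh download. Below is a minimal check, assuming the standard cache location and the usual filename for the notop VGG16 weights; adjust if your setup differs.

# Check whether the downloaded VGG16 weights sit where Keras expects them.
# The path and filename below are the Keras defaults.
import os

cache_dir = os.path.expanduser(os.path.join('~', '.keras', 'models'))
weights_path = os.path.join(cache_dir, 'vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')

if not os.path.isdir(cache_dir):
    os.makedirs(cache_dir)  # the dot-directory the md tip above is about
print('found:' if os.path.isfile(weights_path) else 'missing:', weights_path)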
Next, here is the source code:

# -*- coding: utf-8 -*-

from __future__ import print_function
from keras.preprocessing.image import load_img, img_to_array
from scipy.misc import imsave
import numpy as np
from scipy.optimize import fmin_l_bfgs_b
import time
import argparse

from keras.applications import vgg16
from keras import backend as K

parser = argparse.ArgumentParser(description='Neural style transfer with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,
                    help='Path to the image to transform.')
parser.add_argument('style_reference_image_path', metavar='ref', type=str,
                    help='Path to the style reference image.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,
                    help='Prefix for the saved results.')
parser.add_argument('--iter', type=int, default=10, required=False,
                    help='Number of iterations to run.')
parser.add_argument('--content_weight', type=float, default=0.025, required=False,
                    help='Content weight.')
parser.add_argument('--style_weight', type=float, default=1.0, required=False,
                    help='Style weight.')
parser.add_argument('--tv_weight', type=float, default=1.0, required=False,
                    help='Total Variation weight.')

args = parser.parse_args()
base_image_path = args.base_image_path
style_reference_image_path = args.style_reference_image_path
result_prefix = args.result_prefix
iterations = args.iter

# these are the weights of the different loss components
total_variation_weight = args.tv_weight
style_weight = args.style_weight
content_weight = args.content_weight

# dimensions of the generated picture.
width, height = load_img(base_image_path).size
img_nrows = 400
img_ncols = int(width * img_nrows / height)

# util function to open, resize and format pictures into appropriate tensors


def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_nrows, img_ncols))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img

# util function to convert a tensor into a valid image


def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, img_nrows, img_ncols))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_nrows, img_ncols, 3))
    # Remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # 'BGR'->'RGB'
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x

# get tensor representations of our images
base_image = K.variable(preprocess_image(base_image_path))
style_reference_image = K.variable(preprocess_image(style_reference_image_path))

# this will contain our generated image
if K.image_data_format() == 'channels_first':
    combination_image = K.placeholder((1, 3, img_nrows, img_ncols))
else:
    combination_image = K.placeholder((1, img_nrows, img_ncols, 3))

# combine the 3 images into a single Keras tensor
input_tensor = K.concatenate([base_image,
                              style_reference_image,
                              combination_image], axis=0)

# build the VGG16 network with our 3 images as input
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=input_tensor,
                    weights='imagenet', include_top=False)
print('Model loaded.')

# get the symbolic outputs of each "key" layer (we gave them unique names).
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])

# compute the neural style loss
# first we need to define 4 util functions

# the gram matrix of an image tensor (feature-wise outer product)


def gram_matrix(x):
    assert K.ndim(x) == 3
    if K.image_data_format() == 'channels_first':
        features = K.batch_flatten(x)
    else:
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    gram = K.dot(features, K.transpose(features))
    return gram

# the "style loss" is designed to maintain
# the style of the reference image in the generated image.
# It is based on the gram matrices (which capture style) of
# feature maps from the style reference image
# and from the generated image


def style_loss(style, combination):
    assert K.ndim(style) == 3
    assert K.ndim(combination) == 3
    S = gram_matrix(style)
    C = gram_matrix(combination)
    channels = 3
    size = img_nrows * img_ncols
    return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))

# an auxiliary loss function
# designed to maintain the "content" of the
# base image in the generated image


def content_loss(base, combination):
    return K.sum(K.square(combination - base))

# the 3rd loss function, total variation loss,
# designed to keep the generated image locally coherent


def total_variation_loss(x):
    assert K.ndim(x) == 4
    if K.image_data_format() == 'channels_first':
        a = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] -
                     x[:, :, 1:, :img_ncols - 1])
        b = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] -
                     x[:, :, :img_nrows - 1, 1:])
    else:
        a = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] -
                     x[:, 1:, :img_ncols - 1, :])
        b = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] -
                     x[:, :img_nrows - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))

# combine these loss functions into a single scalar
loss = K.variable(0.)
layer_features = outputs_dict['block4_conv2']
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(base_image_features,
                                      combination_features)

feature_layers = ['block1_conv1', 'block2_conv1',
                  'block3_conv1', 'block4_conv1',
                  'block5_conv1']
for layer_name in feature_layers:
    layer_features = outputs_dict[layer_name]
    style_reference_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_reference_features, combination_features)
    loss += (style_weight / len(feature_layers)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)

# get the gradients of the generated image wrt the loss
grads = K.gradients(loss, combination_image)

outputs = [loss]
if isinstance(grads, (list, tuple)):
    outputs += grads
else:
    outputs.append(grads)

f_outputs = K.function([combination_image], outputs)


def eval_loss_and_grads(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((1, 3, img_nrows, img_ncols))
    else:
        x = x.reshape((1, img_nrows, img_ncols, 3))
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values

# this Evaluator class makes it possible
# to compute loss and gradients in one pass
# while retrieving them via two separate functions,
# "loss" and "grads". This is done because scipy.optimize
# requires separate functions for loss and gradients,
# but computing them separately would be inefficient.


class Evaluator(object):

    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the neural style loss
if K.image_data_format() == 'channels_first':
    x = np.random.uniform(0, 255, (1, 3, img_nrows, img_ncols)) - 128.
else:
    x = np.random.uniform(0, 255, (1, img_nrows, img_ncols, 3)) - 128.

for i in range(iterations):
    print('Start of iteration', i)
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    # save current generated image
    img = deprocess_image(x.copy())
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))
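For reference, here is the total loss the script minimizes, written out purely as a restatement of the code above (with α, β, γ being --content_weight, --style_weight, --tv_weight; G(·) the Gram matrix; C = 3 channels; N = img_nrows × img_ncols):

$$\mathcal{L} = \alpha\,\lVert F^{\mathrm{comb}} - F^{\mathrm{base}}\rVert^{2} + \frac{\beta}{5}\sum_{l=1}^{5}\frac{\lVert G(S^{l}) - G(C^{l})\rVert^{2}}{4\,C^{2}N^{2}} + \gamma\,\mathrm{TV}(x)$$

The content term uses the block4_conv2 features, the style sum runs over the five blockN_conv1 layers, and TV(x) is the total variation term defined in total_variation_loss.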

Copy and paste the code above into a file, say crawl.py, and run it:

python crawl.py <path of the image to transform>  <path of the style template image>  <output path prefix>

Note that the last argument is a prefix, not a filename: the script appends _at_iteration_N.png itself, so don't include a .jpg or other extension. For example:

python crawl.py    ./me.jpg    ./starry_night.jpg    ./me_t

The iterations will then start. Note that, run as above, the script performs only 10 iterations by default, which may not give a great result; you can append the iteration count to the command:


python crawl.py    ./me.jpg    ./starry_night.jpg    ./me_t   --iter  20

This will run 20 iterations.
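Besides --iter, the argparse section at the top of the script exposes the three loss weights, so you can trade content fidelity against style strength. The values below are simply the script's defaults written out explicitly; raising --style_weight (or lowering --content_weight) pushes the output further toward the reference painting:

python crawl.py    ./me.jpg    ./starry_night.jpg    ./me_t   --iter 20   --content_weight 0.025   --style_weight 1.0   --tv_weight 1.0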

 

Try to run this on a machine with a reasonably capable CPU; otherwise the thirty-odd generated images can take several hours to finish!

 

 
