Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576).
In this assignment, you will:
- Implement the neural style transfer algorithm
- Generate novel artistic images using your algorithm
Most of the algorithms you’ve studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you’ll optimize a cost function to get pixel values!
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a “content” image (C) and a “style” image (S), to create a “generated” image (G). The generated image G combines the “content” of the image C with the “style” of image S.
In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).
Let’s see how you can do this.
Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.
Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we’ll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers).
Run the following code to load parameters from the VGG model. This may take a few seconds.
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
(The print shows a Python dictionary whose keys are the layer names: input, conv1_1, conv1_2, avgpool1, conv2_1, conv2_2, avgpool2, conv3_1, conv3_2, conv3_3, conv3_4, avgpool3, conv4_1, conv4_2, conv4_3, conv4_4, avgpool4, conv5_1, conv5_2, conv5_3, conv5_4, avgpool5; each value is the corresponding TensorFlow tensor, whose repr is omitted here.)
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable’s value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this:
model["input"].assign(image)
This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer conv4_2, when the network is run on this image, you would run a TensorFlow session on the correct tensor conv4_2, as follows:
sess.run(model["conv4_2"])
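Putting the two pieces together, a minimal sketch of this assign-then-run pattern (assuming model, a preprocessed image, and an open session sess already exist) would be:

# Minimal sketch: feed an image to the network, then read one layer's activations.
sess.run(model["input"].assign(image))    # make `image` the network's input
activations = sess.run(model["conv4_2"])  # forward-propagate and evaluate conv4_2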
We will build the NST algorithm in three steps:
- Build the content cost function $J_{content}(C,G)$
- Build the style cost function $J_{style}(S,G)$
- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$
In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
[Output image output_7_2.png: the Louvre content image]
The content image (C) shows the Louvre museum’s pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.
**3.1.1 - How do you ensure the generated image G matches the content of the image C?**
As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.
We would like the “generated” image G to have similar content as the input image C. Suppose you have chosen some layer’s activations to represent the content of an image. In practice, you’ll get the most visually pleasing results if you choose a layer in the middle of the network–neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)
So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we’ll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:

$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C} \sum_{\text{all entries}} \left(a^{(C)} - a^{(G)}\right)^2 \tag{1}$$

Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer’s activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn’t needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)
Exercise: Compute the “content cost” using TensorFlow.
Instructions: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G using X.get_shape().as_list()
2. Unroll a_C and a_G into 2D matrices, as explained above
3. Compute the content cost
# GRADED FUNCTION: compute_content_cost

def compute_content_cost(a_C, a_G):
    """
    Computes the content cost

    Arguments:
    a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G

    Returns:
    J_content -- scalar that you compute using equation 1 above.
    """
    ### START CODE HERE ###
    # Retrieve dimensions from a_G (≈1 line)
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Reshape a_C and a_G from 3D volumes to 2D matrices (≈2 lines)
    a_C_unrolled = tf.transpose(tf.reshape(a_C, [n_H * n_W, n_C]))
    a_G_unrolled = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))

    # Compute the cost with TensorFlow (≈1 line)
    # tf.reduce_sum() sums the elements across dimensions of a tensor.
    J_content = (1 / (4 * n_H * n_W * n_C)) * tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled)))
    ### END CODE HERE ###

    return J_content
tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    J_content = compute_content_cost(a_C, a_G)
    print("J_content = " + str(J_content.eval()))
J_content = 6.7655935
Expected Output:
**J_content** | 6.76559 |
What you should remember:
- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are.
- When we minimize the content cost later, this will help make sure G has similar content to C.
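As a quick sanity check of equation (1), here is the same computation in plain NumPy on small random volumes; this is an illustration only, not part of the graded code:

import numpy as np

# NumPy version of equation (1) on a toy example.
np.random.seed(0)
n_H, n_W, n_C = 4, 4, 3
a_C = np.random.randn(1, n_H, n_W, n_C)
a_G = np.random.randn(1, n_H, n_W, n_C)
J_content = np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)
print(J_content)  # a small positive scalar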
For our running example, we will use the following style image:
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
[Output image output_14_2.png: the Monet style image]
This painting was painted in the impressionist style.
Let’s see how you can now define a “style” cost function $J_{style}(S,G)$.
The style matrix is also called a “Gram matrix.” In linear algebra, the Gram matrix G of a set of vectors $(v_1, \dots, v_n)$ is the matrix of dot products, whose entries are $G_{ij} = v_i^T v_j = \texttt{np.dot}(v_i, v_j)$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: if they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large.

Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context.

In NST, you can compute the Style matrix by multiplying the “unrolled” filter matrix with its transpose:
(Note: the content cost is computed directly from the chosen layer’s activation values, whereas the style cost is computed from the correlations between the activations of different filters, which is exactly what the Gram matrix captures.)
The result is a matrix of dimension $(n_C, n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.

One important part of the Gram matrix is that the diagonal elements such as $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture.

By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.
Exercise: Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: the Gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at Hint 1 and Hint 2.
# GRADED FUNCTION: gram_matrix

def gram_matrix(A):
    """
    Argument:
    A -- matrix of shape (n_C, n_H*n_W)

    Returns:
    GA -- Gram matrix of A, of shape (n_C, n_C)
    """
    ### START CODE HERE ### (≈1 line)
    # tf.matmul(a, b, transpose_b=True) multiplies matrix `a` by the transpose of `b`.
    GA = tf.matmul(A, A, transpose_b=True)
    ### END CODE HERE ###

    return GA
tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    A = tf.random_normal([3, 2*1], mean=1, stddev=4)
    GA = gram_matrix(A)
    print("GA = " + str(GA.eval()))
GA = [[ 6.422305 -4.429122 -2.096682]
[-4.429122 19.465837 19.563871]
[-2.096682 19.563871 20.686462]]
Expected Output:
**GA** | [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] |
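To convince yourself the TensorFlow version is correct, note that in plain NumPy the Gram matrix of A is simply A @ A.T (illustration only, not part of the graded code):

import numpy as np

# The Gram matrix of A (shape (n_C, n_H*n_W)) is A @ A.T: a symmetric
# (n_C, n_C) matrix whose (i, j) entry is the dot product of rows i and j.
A_np = np.random.randn(3, 2)
GA_np = A_np @ A_np.T
print(GA_np.shape)                  # (3, 3)
print(np.allclose(GA_np, GA_np.T))  # True: Gram matrices are symmetric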
After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the “style” image S and that of the “generated” image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as:

$$J_{style}^{[l]}(S,G) = \frac{1}{4 \times n_C^2 \times (n_H \times n_W)^2} \sum_{i=1}^{n_C} \sum_{j=1}^{n_C} \left(G^{(S)}_{ij} - G^{(G)}_{ij}\right)^2 \tag{2}$$

where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the “style” image and the “generated” image, computed using the hidden layer activations for a particular hidden layer in the network.
Exercise: Compute the style cost for a single layer.
Instructions: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G using X.get_shape().as_list()
2. Reshape a_S and a_G into matrices of shape (n_C, n_H*n_W)
3. Compute the Gram matrices of S and G, then the style cost using equation (2)
# GRADED FUNCTION: compute_layer_style_cost

def compute_layer_style_cost(a_S, a_G):
    """
    Arguments:
    a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G

    Returns:
    J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
    """
    ### START CODE HERE ###
    # Retrieve dimensions from a_G (≈1 line)
    m, n_H, n_W, n_C = a_G.get_shape().as_list()

    # Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
    a_S = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))
    a_G = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))

    # Computing gram_matrices for both images S and G (≈2 lines)
    GS = gram_matrix(a_S)
    GG = gram_matrix(a_G)

    # Computing the loss (≈1 line)
    J_style_layer = (1 / (4 * n_C**2 * (n_H * n_W)**2)) * tf.reduce_sum(tf.square(tf.subtract(GS, GG)))
    ### END CODE HERE ###

    return J_style_layer
tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
    J_style_layer = compute_layer_style_cost(a_S, a_G)
    print("J_style_layer = " + str(J_style_layer.eval()))
J_style_layer = 9.190278
Expected Output:
**J_style_layer** | 9.19028 |
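As with the content cost, a plain-NumPy version of equation (2) on already-unrolled toy matrices can serve as a sanity check (illustration only, not part of the graded code):

import numpy as np

# NumPy version of equation (2) on toy (n_C, n_H*n_W) matrices.
np.random.seed(1)
n_H, n_W, n_C = 4, 4, 3
a_S = np.random.randn(n_C, n_H * n_W)  # already unrolled
a_G = np.random.randn(n_C, n_H * n_W)
GS, GG = a_S @ a_S.T, a_G @ a_G.T
J_style_layer = np.sum((GS - GG) ** 2) / (4 * n_C**2 * (n_H * n_W)**2)
print(J_style_layer)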
So far you have captured the style from only one layer. We’ll get better results if we “merge” style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:
STYLE_LAYERS = [
    ('conv1_1', 0.2),
    ('conv2_1', 0.2),
    ('conv3_1', 0.2),
    ('conv4_1', 0.2),
    ('conv5_1', 0.2)]
You can combine the style costs for different layers as follows:
$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$

where the values for $\lambda^{[l]}$ are given in STYLE_LAYERS.
We’ve implemented a compute_style_cost(...) function. It simply calls your compute_layer_style_cost(...) several times, and weights their results using the values in STYLE_LAYERS. Read over it to make sure you understand what it’s doing.
def compute_style_cost(model, STYLE_LAYERS):
    """
    Computes the overall style cost from several chosen layers

    Arguments:
    model -- our tensorflow model
    STYLE_LAYERS -- A python list containing:
                        - the names of the layers we would like to extract style from
                        - a coefficient for each of them

    Returns:
    J_style -- tensor representing a scalar value, style cost defined above by equation (2)
    """
    # Initialize the overall style cost
    J_style = 0

    for layer_name, coeff in STYLE_LAYERS:  # STYLE_LAYERS is a list of (layer_name, coeff) tuples

        # Select the output tensor of the currently selected layer from the pretrained model
        out = model[layer_name]

        # Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
        a_S = sess.run(out)

        # Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
        # and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
        # when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
        a_G = out

        # Compute style_cost for the current layer
        J_style_layer = compute_layer_style_cost(a_S, a_G)

        # Add coeff * J_style_layer of this layer to overall style cost
        J_style += coeff * J_style_layer

    return J_style
Note: In the inner loop of the for-loop above, a_G is a tensor and hasn’t been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
What you should remember:
- The style of an image can be represented using the Gram matrix of a hidden layer’s activations. We get even better results by combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.
- Minimizing the style cost will cause the image G to follow the style of the image S.
Finally, let’s create a cost function that minimizes both the style and the content cost. The formula is:
$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$
Exercise: Implement the total cost function which includes both the content cost and the style cost.
# GRADED FUNCTION: total_cost

def total_cost(J_content, J_style, alpha = 10, beta = 40):
    """
    Computes the total cost function

    Arguments:
    J_content -- content cost coded above
    J_style -- style cost coded above
    alpha -- hyperparameter weighting the importance of the content cost
    beta -- hyperparameter weighting the importance of the style cost

    Returns:
    J -- total cost as defined by the formula above.
    """
    ### START CODE HERE ### (≈1 line)
    J = alpha * J_content + beta * J_style
    ### END CODE HERE ###

    return J
tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(3)
    J_content = np.random.randn()
    J_style = np.random.randn()
    J = total_cost(J_content, J_style)
    print("J = " + str(J))
J = 35.34667875478276
Expected Output:
**J** | 35.34667875478276 |
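As a quick arithmetic check: with seed 3, the two np.random.randn() draws come out to roughly J_content ≈ 1.78863 and J_style ≈ 0.43651, so J = 10 × 1.78863 + 40 × 0.43651 ≈ 17.886 + 17.460 ≈ 35.347, which matches the expected value above.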
Finally, let’s put everything together to implement Neural Style Transfer!
Here’s what the program will have to do:
1. Create an Interactive Session
2. Load the content image
3. Load the style image
4. Randomly initialize the image to be generated
5. Load the VGG-19 model
6. Build the TensorFlow graph:
   - Run the content image through the VGG-19 model and compute the content cost
   - Run the style image through the VGG-19 model and compute the style cost
   - Compute the total cost
   - Define the optimizer and the learning rate
7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step
You’ve previously implemented the overall cost $J(G)$. We’ll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an “Interactive Session”. Unlike a regular session, the “Interactive Session” installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code.

Let’s start the interactive session.
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
Let’s load, reshape, and normalize our “content” image (the Louvre museum picture):
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
Let’s load, reshape and normalize our “style” image (Claude Monet’s painting):
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
Now, we initialize the “generated” image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the “generated” image more rapidly match the content of the “content” image. (Feel free to look in nst_utils.py to see the details of generate_noise_image(...); to do so, click “File-->Open...” at the upper-left corner of this Jupyter notebook.)
generated_image = generate_noise_image(content_image)  # from nst_utils.py: generates a noisy image correlated with the content image
imshow(generated_image[0])
[Output image output_47_2.png: the initial noisy generated image]
Next, as explained in part (2), let’s load the VGG-19 model.
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
To get the program to compute the content cost, we will now assign a_C and a_G to be the appropriate hidden layer activations. We will use layer conv4_2 to compute the content cost. The code below does the following:
1. Assign the content image to be the input to the VGG model.
2. Set a_C to be the tensor giving the hidden layer activation for layer conv4_2.
3. Set a_G to be the tensor giving the hidden layer activation for the same layer.
4. Compute the content cost using a_C and a_G.
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
# In short: a_C is fixed, while a_G changes as the optimization iterates over the generated image.
Note: At this point, a_G is a tensor and hasn’t been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
Exercise: Now that you have J_content and J_style, compute the total cost J by calling total_cost(). Use alpha = 10 and beta = 40.
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
### END CODE HERE ###
You’d previously learned how to set up the Adam optimizer in TensorFlow. Let’s do that here, using a learning rate of 2.0. See reference.
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
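If the optimization later diverges to nan (as it does in the sample run below), one optional variant, not part of the original assignment, is to clip the gradients before applying them:

# Optional sketch: clip gradients to a fixed range before applying them,
# which can keep a run with a large learning rate from overflowing to nan.
optimizer = tf.train.AdamOptimizer(2.0)
grads_and_vars = optimizer.compute_gradients(J)
clipped = [(tf.clip_by_value(g, -1e3, 1e3), v)
           for g, v in grads_and_vars if g is not None]
train_step = optimizer.apply_gradients(clipped)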
Exercise: Implement the model_nn() function which initializes the variables of the tensorflow graph, assigns the input image (initial generated image) as the input of the VGG-19 model and runs the train_step for a large number of steps.
def model_nn(sess, input_image, num_iterations = 200):

    # Initialize global variables (you need to run the session on the initializer)
    ### START CODE HERE ### (1 line)
    sess.run(tf.global_variables_initializer())
    ### END CODE HERE ###

    # Run the noisy input image (initial generated image) through the model. Use assign().
    ### START CODE HERE ### (1 line)
    sess.run(model["input"].assign(input_image))
    ### END CODE HERE ###

    for i in range(num_iterations):

        # Run the session on the train_step to minimize the total cost
        ### START CODE HERE ### (1 line)
        sess.run(train_step)
        ### END CODE HERE ###

        # Compute the generated image by running the session on the current model['input']
        ### START CODE HERE ### (1 line)
        generated_image = sess.run(model["input"])
        ### END CODE HERE ###

        # Print every 20 iterations.
        if i % 20 == 0:
            Jt, Jc, Js = sess.run([J, J_content, J_style])
            print("Iteration " + str(i) + " :")
            print("total cost = " + str(Jt))
            print("content cost = " + str(Jc))
            print("style cost = " + str(Js))

            # Save current generated image in the "/output" directory
            save_image("output/" + str(i) + ".png", generated_image)

    # Save last generated image
    save_image('output/generated_image.jpg', generated_image)

    return generated_image
Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
model_nn(sess, generated_image)
Iteration 0 :
total cost = 5792469000.0
content cost = 7561.0654
style cost = 144809820.0
Iteration 20 :
total cost = nan
content cost = nan
style cost = nan
(Iterations 40 through 180 print the same nan values, and the returned generated_image array is filled entirely with nan. This particular run diverged: the costs overflowed within the first 20 iterations. Rerunning with a smaller learning rate, or with the gradient-clipping variant sketched above, should give finite costs in line with the expected output below.)
Expected Output:
**Iteration 0:** | total cost = 5.05035e+09 | content cost = 7877.67 | style cost = 1.26257e+08 |
You’re done! After running this, in the upper bar of the notebook click on “File” and then “Open”. Go to the “/output” directory to see all the saved images. Open “generated_image” to see the generated image!
You should see something like the image presented below on the right:
We didn’t want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get the best looking results, running the optimization algorithm longer (and perhaps with a smaller learning rate) might work better. After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better looking images.
Here are few other examples:
The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)
The tomb of Cyrus the great in Pasargadae with the style of a Ceramic Kashi from Ispahan.
A scientific study of a turbulent fluid with the style of an abstract blue fluid painting.
Finally, you can also rerun the algorithm on your own images!
To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here’s what you should do:
content_image = scipy.misc.imread("images/louvre.jpg")
style_image = scipy.misc.imread("images/claude-monet.jpg")
to:
content_image = scipy.misc.imread("images/my_content.jpg")
style_image = scipy.misc.imread("images/my_style.jpg")
You can also tune your hyperparameters:
Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network’s parameters. Deep learning has many different types of models and this is only one of them!
What you should remember:
- Neural Style Transfer is an algorithm that, given a content image C and a style image S, can generate an artistic image.
- It uses representations (hidden layer activations) based on a pretrained ConvNet.
- The content cost function is computed using one hidden layer's activations.
- The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.
- Optimizing the total cost function results in synthesizing new images.

This was the final programming exercise of this course. Congratulations, you’ve finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models!
The Neural Style Transfer algorithm was due to Gatys et al. (2015). Harish Narayanan and GitHub user “log0” also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights were from the work of the MatConvNet team.