deeplearning.ai - TensorFlow Guide

1 - Exploring the Tensorflow Library

Import the libraries:

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
import os
import tensorflow.python.util.deprecation as deprecation
deprecation._PRINT_DEPRECATION_WARNINGS = False              # silence TF1 deprecation warnings
if type(tf.contrib) != type(tf): tf.contrib._warning = None  # silence tf.contrib warnings

%matplotlib inline
np.random.seed(1)

Let's look at an example that computes the loss for a single training example:

y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39

loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss

init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss

The output is 9.

Writing and running programs in TensorFlow has the following steps:

  1. Create tensors (variables) that are not yet executed/evaluated.
  2. Write operations between those tensors.
  3. Initialize your tensors.
  4. Create a session.
  5. Run the session; this executes the operations written above.

Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, without evaluating its value. To evaluate it, we first built the initialization op with init = tf.global_variables_initializer() and then ran session.run(init), which initialized the loss variable; in the last line we were finally able to evaluate the value of loss and print it.
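
A small sketch of this "define first, evaluate later" behavior (the constants here are just an illustrative example):

a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a, b)

print(c)                   # Tensor("Mul:0", shape=(), dtype=int32) -- the graph node, not 20
with tf.Session() as sess:
    print(sess.run(c))     # 20 -- the value is computed only when the session runs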

1.1 - Linear function

Compute the following expression: Y = WX + b, where W and X are random matrices and b is a random vector.

Exercise: Compute WX + b, where W, X, and b are drawn from a random normal distribution. W has shape (4, 3), X has shape (3, 1), and b has shape (4, 1). As an example, here is how you would define a constant X with shape (3, 1):

X = tf.constant(np.random.randn(3, 1), name="X")

X = tf.constant(np.random.randn(3, 1), name='X')
W = tf.constant(np.random.randn(4, 3), name='W')
b = tf.constant(np.random.randn(4, 1), name='b')
Y = tf.matmul(W, X) + b
    
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate

sess = tf.Session()
result = sess.run(Y)
sess.close()              # close the session once the result has been fetched

1.2 - Computing the sigmoid

TensorFlow provides a number of commonly used neural network functions, such as tf.sigmoid and tf.softmax. Next, compute the sigmoid function.

You will use a placeholder variable x. When running the session, you will pass the input z in with a feed dictionary. In this exercise you have to:

  1. Create a placeholder x
  2. Define the operation using tf.sigmoid
  3. Run the session

Exercise: Implement the sigmoid function.

There are two typical ways to create and use a session:

[Figure 1: the two methods for creating and using a session]
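
Since the figure may not render, here is a minimal sketch of the two methods (the tiny constant is just an example):

c = tf.constant(3.0)

# Method 1: create the session, run it, then close it explicitly
sess = tf.Session()
result = sess.run(c)
sess.close()

# Method 2: use a context manager; the session is closed automatically
with tf.Session() as sess:
    result = sess.run(c)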

# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name='x')

# compute sigmoid(x)
sigmoid = tf.sigmoid(x)

# Create a session, and run it. Please use the method 2 explained above. 
# You should use a feed_dict to pass z's value to x. 
with tf.Session() as sess:
    # Run session and call the output "result"
    result = sess.run(sigmoid, feed_dict={x:z})
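
In the notebook this snippet sits inside a wrapper that receives z; a minimal sketch (the wrapper name sigmoid follows the notebook, the test values are just examples):

def sigmoid(z):
    x = tf.placeholder(tf.float32, name='x')
    sig = tf.sigmoid(x)
    with tf.Session() as sess:
        result = sess.run(sig, feed_dict={x: z})
    return result

print(sigmoid(0))      # 0.5
print(sigmoid(12))     # ~0.999994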

1.3 - Computing the Cost

You can also use a built-in function to compute the cost of your neural network. Computing the following cost function takes only one line of code:
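
(The original formula image is missing; for one example with logit z and label y, the cross-entropy loss being computed is the standard one:)

$$\mathcal{L}(z, y) = -\big[\, y \log \sigma(z) + (1-y)\log\big(1-\sigma(z)\big) \,\big]$$

tf.nn.sigmoid_cross_entropy_with_logits computes exactly this, elementwise, directly from the logits z.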

Exercise: Implement the cross-entropy cost function.

# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name='z')
y = tf.placeholder(tf.float32, name='y')
    
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
    
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
    
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict={z:logits, y:labels})
    
# Close the session (approx. 1 line). See method 1 above.
sess.close()
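
In the notebook these lines form the body of a function cost(logits, labels); a hedged usage sketch (the test values are just examples):

logits = np.array([0.2, 0.4, 0.7, 0.9])
labels = np.array([0., 0., 1., 1.])
print(cost(logits, labels))    # one per-example loss value for each entry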

1.4 - Using One Hot encodings

Convert the label vector y into a one-hot vector:

[Figure 2: converting a vector of labels into a one-hot matrix]

In TensorFlow this takes just one line of code:

tf.one_hot(labels, depth, axis)

Exercise: Implement the function below, which takes a vector of labels and the total number of classes C, and returns the one-hot encoding.

# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, name='C')
    
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
    
# Create the session (approx. 1 line)
sess = tf.Session()
    
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
    
# Close the session (approx. 1 line). See method 1 above.
sess.close()
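
In the notebook these lines form the body of a function one_hot_matrix(labels, C); a hedged usage sketch (with axis=0, each column of the result is one example):

labels = np.array([1, 2, 3, 0, 2, 1])
print(one_hot_matrix(labels, C=4))    # shape (4, 6); e.g. column 0 has a single 1 in row 1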

1.5 - Initialize with zeros and ones

Initialize a vector of zeros and a vector of ones; the functions to use are tf.ones() and tf.zeros().

Exercise: Implement the function below, which takes a shape and returns an array of ones of that shape.

# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape=shape)
    
# Create the session (approx. 1 line)
sess = tf.Session()
    
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
    
# Close the session (approx. 1 line). See method 1 above.
sess.close()
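
Wrapped as the notebook's ones(shape) helper, the usage would be:

print(ones([3]))    # [1. 1. 1.]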

 

2 - Building your first neural network in tensorflow

Build your first neural network using TensorFlow. Implementing a TensorFlow model involves two steps:

  1. Create the computation graph
  2. Run it

2.0 - Problem statement: SIGNS Dataset

  • Training set: 1080 images of hand signs (64 by 64 pixels) representing numbers from 0 to 5 (180 images per number)
  • Test set: 120 images of hand signs (64 by 64 pixels) representing numbers from 0 to 5 (20 images per number)

This is a subset of the SIGNS dataset; the complete dataset contains many more signs.

Original images from the dataset:

[Figure 3: sample images from the SIGNS dataset]

Load the dataset:

# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

Change the index below and run the cell to visualize some examples from the dataset.

# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))

Output: y = 5

[Figure 4: example image from the training set, label y = 5]

As usual, flatten the image dataset, then normalize it by dividing by 255. On top of that, convert each label to a one-hot vector. Run the cell below to do so.

# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)

print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))

Output:

number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)

Note that 12288 comes from 64 × 64 × 3: each image is 64 by 64 pixels, with 3 RGB channels.

The goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you will build a TensorFlow model that is almost the same as the one you previously built in numpy for cat recognition (except now using a softmax output). It is a good occasion to compare the numpy implementation to the TensorFlow one.

The model is LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX; a SOFTMAX layer generalizes SIGMOID to the case of more than two classes.

2.1 - Create placeholders

The first step is to create placeholders for X and Y, so that you can feed in the training data when you run the session later.

Exercise: Create the placeholders.

X = tf.placeholder(shape=[n_x, None], dtype=tf.float32)
Y = tf.placeholder(shape=[n_y, None], dtype=tf.float32)
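
In the notebook these two lines form the body of a helper; a minimal sketch (the name create_placeholders matches how it is called in the model function below):

def create_placeholders(n_x, n_y):
    """Create placeholders for the inputs (n_x features) and the labels (n_y classes)."""
    X = tf.placeholder(shape=[n_x, None], dtype=tf.float32, name='X')
    Y = tf.placeholder(shape=[n_y, None], dtype=tf.float32, name='Y')
    return X, Y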

2.2 - Initializing the parameters

The second step is to initialize the parameters.

Exercise: Initialize the weights with Xavier initialization and the biases with zeros.

W1 = tf.get_variable(name='W1', shape=[25, 12288], 
                     initializer=tf.contrib.layers.xavier_initializer(seed=1))
b1 = tf.get_variable(name='b1', shape=[25, 1], 
                     initializer=tf.zeros_initializer())
W2 = tf.get_variable(name='W2', shape=[12, 25], 
                     initializer=tf.contrib.layers.xavier_initializer(seed=1))
b2 = tf.get_variable(name='b2', shape=[12, 1], 
                     initializer=tf.zeros_initializer())
W3 = tf.get_variable(name='W3', shape=[6, 12], 
                     initializer=tf.contrib.layers.xavier_initializer(seed=1))
b3 = tf.get_variable(name='b3', shape=[6, 1], 
                     initializer=tf.zeros_initializer())
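
In the notebook these variables are then packed into the dictionary that initialize_parameters() returns (the packing below follows that convention):

parameters = {"W1": W1, "b1": b1,
              "W2": W2, "b2": b2,
              "W3": W3, "b3": b3}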

2.3 - Forward propagation in tensorflow

Implement the forward propagation module. The function takes a dictionary of parameters and completes the forward pass.

Question: It is important to note that forward propagation stops at Z3. The reason is that, in TensorFlow, the output of the last linear layer is given as input to the function computing the loss. Therefore, you don't need A3!

Z1 = tf.matmul(W1, X) + b1                          # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1)                                 # A1 = relu(Z1)
Z2 = tf.matmul(W2, A1) + b2                         # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2)                                 # A2 = relu(Z2)
Z3 = tf.matmul(W3, A2) + b3                         # Z3 = np.dot(W3, A2) + b3
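
Since forward_propagation receives the parameter dictionary, the weights and biases are first retrieved from it; a sketch following the notebook's convention:

W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']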

2.4 - Compute cost

Question: Implement the cost function below.

  • It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes), so Z3 and Y have to be transposed.
  • Besides, tf.reduce_mean takes the average of the per-example losses over all the examples.

# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
    
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
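
(For reference, written out from the surrounding definitions: with predictions \hat{y}^{(i)} = softmax(z_3^{(i)}), the cost being minimized is)

$$ J = -\frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{C} y_j^{(i)} \log \hat{y}_j^{(i)} $$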

2.5 - Backward propagation & parameter updates

After you compute the cost function, you will create an "optimizer" object. You have to call this object along with the cost when you run the session. When called, it will perform an optimization step on the given cost with the chosen optimization method and learning rate.

For instance, for gradient descent the optimizer would be:

optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)

To make the optimization you would run the line below. (When coding, we often use _ as a "throwaway" variable for values we won't need later; here _ takes the optimizer's evaluated value, which is not needed, while c takes the value of the cost.)

_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})

2.6 - Building the model

Exercise: Implement the full model.

def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
          num_epochs = 1500, minibatch_size = 32, print_cost = True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
    
    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs
    
    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    
    tf.reset_default_graph()                         # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)                             # to keep consistent results
    seed = 3                                          # to keep consistent results
    (n_x, m) = X_train.shape                          # (n_x: input size, m : number of examples in the train set)
    n_y = Y_train.shape[0]                            # n_y : output size
    costs = []                                        # To keep track of the cost
    
    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###
    
    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###
    
    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###
    
    # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
    ### START CODE HERE ### (1 line)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    ### END CODE HERE ###
    
    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:
        
        # Run the initialization
        sess.run(init)
        
        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                       # Defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch
                
                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
                ### START CODE HERE ### (1 line)
                _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X:minibatch_X,
                                                                           Y:minibatch_Y})
                ### END CODE HERE ###
                
                epoch_cost += minibatch_cost / num_minibatches

            # Print the cost every epoch
            if print_cost == True and epoch % 100 == 0:
                print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost == True and epoch % 5 == 0:
                costs.append(epoch_cost)
                
        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('epochs (per fives)')   # costs are recorded every 5 epochs
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        # lets save the parameters in a variable
        parameters = sess.run(parameters)
        print ("Parameters have been trained!")

        # Calculate the correct predictions
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

        # Calculate accuracy on the test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
        
        return parameters
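
Running the model as in the notebook (training for 1500 epochs can take several minutes on CPU):

parameters = model(X_train, Y_train, X_test, Y_test)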

The output looks like this:

[Figure 5: cost curve over training, plus the reported train/test accuracy]

Thoughts:

  • The model seems to overfit: given the gap between train and test accuracy, you could add L2 or dropout regularization to reduce overfitting.
  • Think of the session as a block of code that trains the model. Each time you run the session on a minibatch, it trains the parameters. In total, you run the session many times (1500 epochs) until you obtain well-trained parameters.

2.7 - Test with your own image (optional / ungraded exercise)

import scipy
from PIL import Image
from scipy import ndimage

## START CODE HERE ## (PUT YOUR IMAGE NAME) 
my_image = "thumbs_up.jpg"
## END CODE HERE ##

# We preprocess your image to fit your algorithm.
# (Note: ndimage.imread and scipy.misc.imresize were removed from newer SciPy
# releases; this snippet assumes the old, TF1-era SciPy used by the course.)
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)

plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))

Output: Your algorithm predicts: y = 3

[Figure 6: the uploaded test image with the model's prediction]
