Deeplearning.ai Course-2 Week-3 Programming Assignment

Preface:

This series follows Andrew Ng's deeplearning.ai video course and records my implementations of the Programming Assignments. Compared with Stanford's CS231n, Andrew's course is easier to follow and well suited for beginners who want to learn deep learning systematically.

This assignment is a first look at TensorFlow. There are many deep learning frameworks, such as Caffe and TensorFlow; using one of them speeds up building a network and reduces the chance of bugs, so you get far more done with far less effort.

1.1 Build the first neural network in TensorFlow

The task is hand-sign recognition: the labels are 0-5, i.e. six different gestures:

[Figure 1: sample images of the six hand signs, labels 0 to 5]

We first do some simple preprocessing of the dataset:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.python.framework import ops
# load_dataset, convert_to_one_hot and random_mini_batches are helper functions
# provided with the assignment

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Flatten each 64x64x3 image into a column vector of length 12288
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T

# Normalize pixel values to [0, 1]
X_train = X_train_flatten / 255.
X_test = X_test_flatten / 255.

# Convert the 0-5 labels to one-hot form
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
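convert_to_one_hot is supplied with the assignment; a minimal sketch of what it does (assuming the labels are integers in 0..C-1) looks like this, with the function name chosen here just for illustration:

def convert_to_one_hot_sketch(Y, C):
    # Y: integer labels with shape (1, m) or (m,); C: number of classes.
    # np.eye(C)[...] picks the matching row of the identity matrix for each label,
    # and .T puts classes on the rows so the result has shape (C, m), like Y_train.
    return np.eye(C)[Y.reshape(-1)].T

print(convert_to_one_hot_sketch(np.array([[2, 0, 5]]), 6))
# column i is the one-hot encoding of the i-th label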

Create placeholders:

def create_placeholders(n_x, n_y):
    # n_x: size of one flattened input image (12288), n_y: number of classes (6)
    X = tf.placeholder(tf.float32, shape=[n_x, None], name="X")
    Y = tf.placeholder(tf.float32, shape=[n_y, None], name="Y")
    return X, Y
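A placeholder carries no data until it is fed inside a session; a quick check of what create_placeholders returns (the printed representations shown in the comments are roughly what TF 1.x outputs):

X, Y = create_placeholders(12288, 6)
print(X)   # Tensor("X:0", shape=(12288, ?), dtype=float32)
print(Y)   # Tensor("Y:0", shape=(6, ?), dtype=float32)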

Initializing the parameters:

def initialize_parameters():
    # Three-layer network: 12288 -> 25 -> 12 -> 6
    W1 = tf.get_variable("W1", [25, 12288], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b1 = tf.get_variable("b1", [25, 1], initializer=tf.zeros_initializer())
    W2 = tf.get_variable("W2", [12, 25], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b2 = tf.get_variable("b2", [12, 1], initializer=tf.zeros_initializer())
    W3 = tf.get_variable("W3", [6, 12], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b3 = tf.get_variable("b3", [6, 1], initializer=tf.zeros_initializer())

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

    return parameters
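tf.contrib.layers.xavier_initializer scales the random initial weights by the layer's fan-in and fan-out so the signal variance stays roughly constant across layers. A rough numpy sketch of the idea (the uniform-limit formula below is the standard Xavier/Glorot one, not code taken from TF's source):

def xavier_uniform_sketch(fan_out, fan_in, rng=np.random):
    # Glorot/Xavier uniform: limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

W1_like = xavier_uniform_sketch(25, 12288)   # same shape and scale as W1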

Forward propagation in TensorFlow:

def forward_propagation(X, parameters):
    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR (the softmax is handled in the cost)
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    Z1 = tf.add(tf.matmul(W1, X), b1)
    A1 = tf.nn.relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)
    A2 = tf.nn.relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)

    return Z3
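Because X and Y are placeholders, the whole graph can be built and inspected without feeding any data; a small check (the exact tensor name depends on the graph, but the shape should be (6, ?)):

ops.reset_default_graph()
with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))   # 6 class scores per (not-yet-fed) example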

Compute cost:

def compute_cost(Z3, Y):
    # softmax_cross_entropy_with_logits expects shape (number of examples, number of classes),
    # so both logits and labels are transposed from (6, m) to (m, 6)
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    return cost
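What softmax_cross_entropy_with_logits computes, written out with numpy for a single column of Z3 and Y (a sketch of the math only, not TF's actual implementation):

def softmax_cross_entropy_sketch(z, y):
    # z: logits for one example, shape (6,); y: one-hot label, shape (6,)
    z = z - np.max(z)                      # shift for numerical stability
    p = np.exp(z) / np.sum(np.exp(z))      # softmax probabilities
    return -np.sum(y * np.log(p))          # cross-entropy loss for this example

# compute_cost averages this value over all examples in the batch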

Backward propagation & parameter updates:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
_, c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
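The single minimize(cost) call bundles the gradient computation (backward pass) and the parameter update into one op; TF 1.x also exposes the two stages separately if you want to inspect or clip gradients (a sketch, not part of the assignment):

opt = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
grads_and_vars = opt.compute_gradients(cost)      # backward propagation: d(cost)/d(variable)
train_step = opt.apply_gradients(grads_and_vars)  # parameter update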

Building the model:

def model(X_train, Y_train, X_test, Y_test, learning_rate=0.0001,
          num_epochs=1500, minibatch_size=32, print_cost=True):

    ops.reset_default_graph()        # allow rerunning the model without variable clashes
    tf.set_random_seed(1)
    seed = 3
    (n_x, m) = X_train.shape         # n_x: input size, m: number of training examples
    n_y = Y_train.shape[0]           # n_y: number of classes
    costs = []

    # Create placeholders and initialize parameters
    X, Y = create_placeholders(n_x, n_y)
    parameters = initialize_parameters()

    # Forward propagation, cost and optimizer (Adam) in the graph
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)

        for epoch in range(num_epochs):
            epoch_cost = 0.
            num_minibatches = int(m / minibatch_size)
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:
                (minibatch_X, minibatch_Y) = minibatch
                _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
                epoch_cost += minibatch_cost / num_minibatches

            if print_cost == True and epoch % 100 == 0:
                print("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost == True and epoch % 5 == 0:
                costs.append(epoch_cost)

        # Plot the cost curve
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per tens)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        # Evaluate the tf.Variables into numpy arrays
        parameters = sess.run(parameters)
        print("Parameters have been trained!")

        # Accuracy on the train and test sets
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

        return parameters
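With all the pieces defined, training the network is a single call; the returned parameters are plain numpy arrays, since sess.run(parameters) has already evaluated the tf.Variables:

parameters = model(X_train, Y_train, X_test, Y_test)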

From the code above, writing a TensorFlow program boils down to four steps (see the minimal sketch after this list):

1. Create the tensors (variables)

2. Create a session

3. Initialize the session

4. Run the session
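A minimal sketch of those four steps, using nothing but two constants and one variable (the numbers are arbitrary):

y_hat = tf.constant(36, name='y_hat')            # 1. create tensors
y = tf.constant(39, name='y')
loss = tf.Variable((y - y_hat)**2, name='loss')  #    operations only build the graph

init = tf.global_variables_initializer()
with tf.Session() as session:                    # 2. create a session
    session.run(init)                            # 3. initialize the variables
    print(session.run(loss))                     # 4. run the session -> prints 9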

Finally, here is my score for the assignment, which shows the code passes the grader. If you find my articles useful, feel free to leave a tip; I will keep posting the Deeplearning.ai assignments!

[Image: screenshot of the assignment grade]
