Andrew Ng Deep Learning Course 2 Assignment (Week 2)

Index link: Andrew Ng Deep Learning Study Notes Index

  1.Gradient Descent
  2.mini-batch Gradient Descent
  3.Momentum
  4.Adam
  5.Model with different optimization algorithms

1. Gradient Descent

import math
import numpy as np
import matplotlib.pyplot as plt
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCase import *

plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
def update_params_with_gd(params,grads,learning_rate):
    L = len(params) // 2
    for layer in range(L):
        params["W"+str(layer + 1)] = params["W"+str(layer + 1)] - learning_rate * grads["dW"+str(layer + 1)]
        params["b"+str(layer + 1)] = params["b"+str(layer + 1)] - learning_rate * grads["db"+str(layer + 1)]
        
    return params
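
  A quick sanity check of the update rule with hand-made parameter and gradient dictionaries (the toy values below are chosen only for illustration; they are not the assignment's test case):

params_demo = {"W1": np.array([[1.0, 2.0]]), "b1": np.array([[0.5]])}
grads_demo = {"dW1": np.array([[0.1, -0.2]]), "db1": np.array([[0.05]])}
params_demo = update_params_with_gd(params_demo, grads_demo, learning_rate=0.1)
print(params_demo["W1"])  # [[0.99 2.02]]
print(params_demo["b1"])  # [[0.495]]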

(1) (Batch) Gradient Descent
  Batch gradient descent is just mini-batch gradient descent with batch_size equal to the size m of the whole training set:

def model_with_batch_gd(X,Y,layer_dims,epochs,learning_rate):
    params = initialize_parameters(layer_dims)  
    for epoch in range(epochs):
        AL,caches = forward_propagation(X,params)
        
        cost = compute_cost(AL,Y)
        
        grads = backward_propagation(X,Y,caches)
        
        params = update_params_with_gd(params,grads,learning_rate)
                        
    return params

(2) Stochastic Gradient Descent
  In each epoch, the samples are fed in one at a time: compute the loss for that single sample, run one gradient-descent update of all the parameters, and repeat m times (m is the number of samples) to finish the epoch. Although each SGD step is cheap to compute, the descent path oscillates, so convergence is not fast.

def model_with_sgd(X,Y,layer_dims,epochs,learning_rate):
    m = X.shape[1]
    params = initialize_parameters(layer_dims)  
    for epoch in range(epochs):
        for sample in range(m):
            # keep the single sample as a column so the arrays stay 2-D
            X_j = X[:,sample:sample + 1]
            Y_j = Y[:,sample:sample + 1]
            AL,caches = forward_propagation(X_j,params)
            cost = compute_cost(AL,Y_j)
            grads = backward_propagation(X_j,Y_j,caches)
            params = update_params_with_gd(params,grads,learning_rate)
    
    return params

2. mini-batch Gradient Descent

  In practice we can take a compromise: choose a batch size between 1 and m, which combines the advantages of batch gradient descent and stochastic gradient descent: the oscillation is smaller and the computation is still fast.

  ① Shuffle: shuffle the order of the samples so that they are split randomly into the different mini-batches;

  ② Partition: split the dataset into mini-batches of size mini_batch_size; when the last group contains fewer than mini_batch_size samples, take all the remaining samples as one mini-batch.
Dataset partitioning:
def random_mini_batch(X,Y,mini_batch_size = 64,seed = 0):
    np.random.seed(seed)
    m = X.shape[1]
    mini_batches = []
    
    permutation = list(np.random.permutation(m))  # a random permutation of the integers 0..m-1
    shuffled_X = X[:,permutation]
    shuffled_Y = Y[:,permutation].reshape((1,m))
    
    num_complete_mini_batches = math.floor(m / mini_batch_size)  # number of complete mini-batches (floor)
    
    for i in range(num_complete_mini_batches):
        mini_batch_X = shuffled_X[:,i * mini_batch_size : (i + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:,i * mini_batch_size : (i + 1) * mini_batch_size]
        mini_batch = (mini_batch_X,mini_batch_Y)
        mini_batches.append(mini_batch)
    
    # if m is not a multiple of mini_batch_size, the remaining samples form one last, smaller mini-batch
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:,mini_batch_size * num_complete_mini_batches :]
        mini_batch_Y = shuffled_Y[:,mini_batch_size * num_complete_mini_batches :]
        mini_batch = (mini_batch_X,mini_batch_Y)
        mini_batches.append(mini_batch)
        
    return mini_batches
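
  A small usage example with toy random data, only to show the resulting split: 148 samples with mini_batch_size = 64 give two full mini-batches plus one final mini-batch of 20 samples.

X_demo = np.random.randn(2, 148)
Y_demo = (np.random.rand(1, 148) > 0.5).astype(int)
batches = random_mini_batch(X_demo, Y_demo, mini_batch_size=64, seed=0)
print(len(batches))          # 3
print(batches[0][0].shape)   # (2, 64)
print(batches[-1][0].shape)  # (2, 20)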

3. Momentum

  Because mini-batch gradient descent uses only a subset of the training set, the direction of each gradient update oscillates; Momentum reduces this oscillation. Momentum replaces the current raw gradient with an exponentially weighted moving average (EWMA) of the gradients of the previous mini-batches, which smooths the gradient signal.

  Momentum update rules (the exact update equations are written out after this list):
  ① Since velocity is initialized to zero, the EWMA computed during the first few iterations is biased; the bias vanishes after enough iterations. If β = 0, the algorithm degenerates into standard gradient descent.
  ② How should β be chosen?

    The larger β is, the smoother the descent, because more of the past gradients still carry weights (which decay exponentially the further back they are) large enough to contribute to the EWMA; if β is too large, however, the updates become overly smoothed.
    β typically lies in the range 0.8-0.999, with 0.9 as the usual default.
    β and the learning rate α should both be tuned as hyperparameters.
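
  For reference, the update implemented in the code below is, for each layer l (α is the learning rate):

$$v_{dW^{[l]}} = \beta\, v_{dW^{[l]}} + (1-\beta)\, dW^{[l]}, \qquad W^{[l]} := W^{[l]} - \alpha\, v_{dW^{[l]}}$$
$$v_{db^{[l]}} = \beta\, v_{db^{[l]}} + (1-\beta)\, db^{[l]}, \qquad b^{[l]} := b^{[l]} - \alpha\, v_{db^{[l]}}$$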

# initialize the velocity (the EWMA of past gradients)
def initialize_velocity(params):
    """
    function: initialize the velocity as a dict of zero arrays, one per gradient (dW1, db1, dW2, db2, ...)
    
    return: a dict with keys "dW1","db1","dW2","db2",... initialized to zeros
    """
    L = len(params) // 2
    velocity = {}
    
    for layer in range(L):
        velocity["dW" + str(layer + 1)] = np.zeros_like(params["W" + str(layer + 1)])
        velocity["db" + str(layer + 1)] = np.zeros_like(params["b" + str(layer + 1)])
    
    return velocity
# update the parameters using the velocity
def update_params_with_momentum(params,grads,velocity,beta,learning_rate):
    """
    arguments:
        beta: the momentum hyperparameter, scalar
    return:
        params: the updated parameters
        velocity: the updated velocity (EWMA of the gradients)
    """
    L = len(params) // 2
    
    for layer in range(L):
        velocity["dW" + str(layer + 1)] = beta * velocity["dW" + str(layer + 1)] + (1 - beta) * grads["dW" + str(layer + 1)]
        velocity["db" + str(layer + 1)] = beta * velocity["db" + str(layer + 1)] + (1 - beta) * grads["db" + str(layer + 1)]
        
        params["W" + str(layer + 1)] = params["W" + str(layer + 1)] - learning_rate * velocity["dW" + str(layer + 1)]
        params["b" + str(layer + 1)] = params["b" + str(layer + 1)] - learning_rate * velocity["db" + str(layer + 1)]
    
    return params,velocity

4. Adam

  Adam combines Momentum and RMSProp. The computation proceeds in three steps (the update equations are written out after this list):

  ① compute the EWMA of the gradients and its bias-corrected value, and store both;
  ② compute the EWMA of the squared gradients and its bias-corrected value;
  ③ update the parameters.
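
  Written out, for each layer l at iteration t (W is shown; b is handled identically):

$$v_{dW} = \beta_1 v_{dW} + (1-\beta_1)\, dW, \qquad v^{corrected}_{dW} = \frac{v_{dW}}{1-\beta_1^{\,t}}$$
$$s_{dW} = \beta_2 s_{dW} + (1-\beta_2)\, (dW)^2, \qquad s^{corrected}_{dW} = \frac{s_{dW}}{1-\beta_2^{\,t}}$$
$$W := W - \alpha\, \frac{v^{corrected}_{dW}}{\sqrt{s^{corrected}_{dW}} + \varepsilon}$$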

def initialize_adam(params):
    """
    return:
        v: v["dW" + str(layer)] = ...,v["db" + str(layer)] = ...
        s: s["dW" + str(layer)] = ...,s["db" + str(layer)] = ...
    """
    L = len(params) // 2
    v = {}
    s = {}
    for layer in range(L):
        v["dW" + str(layer + 1)] = np.zeros_like(params["W" + str(layer + 1)])
        v["db" + str(layer + 1)] = np.zeros_like(params["b" + str(layer + 1)])
        
        s["dW" + str(layer + 1)] = np.zeros_like(params["W" + str(layer + 1)])
        s["db" + str(layer + 1)] = np.zeros_like(params["b" + str(layer + 1)]) 
        
    return v,s
def update_params_with_adam(params,grads,v,s,t,learning_rate = 0.01,beta1 = 0.9,beta2 = 0.999,epsilon = 1e-8):
    """
    t: the current iteration count (starting from 1), used for bias correction
    """
    L = len(params) // 2
    v_corrected = {}
    s_corrected = {}
    
    for layer in range(L):
        v["dW" + str(layer + 1)] = beta1 * v["dW" + str(layer + 1)] + (1 - beta1) * grads["dW" + str(layer + 1)]
        v["db" + str(layer + 1)] = beta1 * v["db" + str(layer + 1)] + (1 - beta1) * grads["db" + str(layer + 1)]
        
        v_corrected["dW" + str(layer + 1)] = v["dW" + str(layer + 1)] / (1 - np.power(beta1,t))
        v_corrected["db" + str(layer + 1)] = v["db" + str(layer + 1)] / (1 - np.power(beta1,t))
        
        s["dW" + str(layer + 1)] = beta2 * s["dW" + str(layer + 1)] + (1 - beta2) * np.power(grads["dW" + str(layer + 1)],2)
        s["db" + str(layer + 1)] = beta2 * s["db" + str(layer + 1)] + (1 - beta2) * np.power(grads["db" + str(layer + 1)],2)
        
        s_corrected["dW" + str(layer + 1)] = s["dW" + str(layer + 1)] / (1 - np.power(beta2,t))
        s_corrected["db" + str(layer + 1)] = s["db" + str(layer + 1)] / (1 - np.power(beta2,t))
        
        params["W" + str(layer + 1)] = params["W" + str(layer + 1)] - learning_rate * v_corrected["dW" + str(layer + 1)] / np.sqrt(s["dW" + str(layer + 1)] + epsilon)
        params["b" + str(layer + 1)] = params["b" + str(layer + 1)] - learning_rate * v_corrected["db" + str(layer + 1)] / np.sqrt(s["db" + str(layer + 1)] + epsilon)
        
    return params,v,s

5. Model with different optimization algorithms

  Training set distribution (figure: scatter plot of the dataset returned by load_dataset; image omitted here):

  The model (the optimization algorithm is selected via the optimizer argument):

def model(X,Y,layer_dims,optimizer,learning_rate = 7e-4,mini_batch_size = 64,
          beta = 0.9,beta1 = 0.9,beta2 = 0.999,epsilon = 1e-8,epochs = 10000):
    L = len(layer_dims)
    costs = []
    t = 0
    seed = 10
    
    params = initialize_parameters(layer_dims)
    
    if optimizer == "gd":
        pass
    elif optimizer == "momentum":
        v = initialize_velocity(params)
    elif optimizer == "adam":
        v,s = initialize_adam(params)
    
    for epoch in range(epochs):
        seed = seed + 1
        mini_batches = random_mini_batch(X,Y,mini_batch_size,seed)
        
        for mini_batch in mini_batches:
            (mini_batch_X,mini_batch_Y) = mini_batch
            
            AL,caches = forward_propagation(mini_batch_X,params)
            
            cost = compute_cost(AL,mini_batch_Y)
            
            grads = backward_propagation(mini_batch_X,mini_batch_Y,caches)
            
            if optimizer == "gd":
                params = update_params_with_gd(params,grads,learning_rate)
            elif optimizer == "momentum":
                params,v = update_params_with_momentum(params,grads,v,beta,learning_rate)
            elif optimizer == "adam":
                t = t + 1
                params,v,s = update_params_with_adam(params,grads,v,s,t,learning_rate,beta1,beta2,epsilon) 
            
        if epoch % 1000 == 0:
            print("epoch: %d ,loss: %3.3f" % (epoch,cost))
        if epoch % 100 == 0:   
            costs.append(cost)
        
    plt.rcParams['figure.figsize'] = (15.0, 4.0)
    plt.subplot(1,2,1)
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('epochs (per 100)')
    plt.title("learning rate: "+str(learning_rate))
    
    plt.subplot(1,2,2)
    plt.title("Model with "+str(optimizer))
    axes = plt.gca()
    axes.set_xlim([-1.5, 2.5])
    axes.set_ylim([-1, 1.5])
    plot_decision_boundary(lambda x: predict_dec(params, x.T), X, np.squeeze(Y))
    
    return params
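
  A typical way to run the comparison (this sketch assumes, as in the assignment's opt_utils, that load_dataset() returns train_X, train_Y and that predict(X, Y, params) prints the training accuracy):

train_X, train_Y = load_dataset()
layer_dims = [train_X.shape[0], 5, 2, 1]

for opt in ["gd", "momentum", "adam"]:
    params = model(train_X, train_Y, layer_dims, optimizer=opt)
    predictions = predict(train_X, train_Y, params)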

  Comparison of results (figures: the cost curves and decision boundaries produced by the code above for gd, momentum and adam; images omitted here):

  Conclusions:
  Because the learning_rate is small and the dataset is fairly simple, standard gradient descent and momentum give similar results, and neither has converged yet; they would need more epochs.
  Adam, however, shows a clear improvement and converges noticeably faster. Its advantages are:
    ① relatively low memory requirements (though higher than standard gradient descent and momentum);
    ② it usually performs well even with a small learning_rate.
