Hand-Coding a Neural Network: Implementing a Simple Neural Network from Scratch (Python)

1. Introduction

There are plenty of deep learning frameworks available nowadays, and we can train the models we want without knowing anything about what happens inside the network. From a learning perspective, though, writing a simple neural network from scratch is well worth doing: it helps you understand how neural networks actually work.

I have previously written a TensorFlow-based implementation of a fully connected network; see 深度学习笔记——全连接神经网络样例程序及详细注释. Here, however, we will implement a fully connected neural network without relying on any deep learning framework, and use it for a classification task.

This post is based mainly on Implementing a Neural Network from Scratch in Python – An Introduction.

If you would like to implement a convolutional neural network as well, see CNN-from-Scratch.

2. Network Structure

The network structure is simple: a two-layer fully connected neural network with tanh as the activation function, as shown in the figure below. The optimizer is full-batch SGD without momentum; for an overview of optimization algorithms, see 深度学习中常用的优化算法(SGD,Nesterov,Adagrad,RMSProp,Adam)总结. The parameters are initialized with the simplest scheme, plain random initialization.

[Figure 1: structure of the two-layer fully connected network]

We use this network to solve a binary classification problem. The data are generated directly with sklearn, and matplotlib is used to visualize the classification result. All of the code is written in Python (IPython).

The forward pass of the network is:

$$
\begin{aligned}
z_1 &= xW_1 + b_1 \\
a_1 &= \tanh(z_1) = \frac{e^{z_1} - e^{-z_1}}{e^{z_1} + e^{-z_1}} \\
z_2 &= a_1 W_2 + b_2 \\
\hat{y} &= a_2 = \mathrm{softmax}(z_2)
\end{aligned}
$$
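
A minimal NumPy sketch of this forward pass, mirroring the forward-propagation block in the full code below (the function name forward and the assumed shapes, x: (N, 2), W1: (2, nn_hdim), W2: (nn_hdim, 2), are just for illustration):

import numpy as np

def forward(x, W1, b1, W2, b2):
    z1 = x.dot(W1) + b1          # z1 = x W1 + b1
    a1 = np.tanh(z1)             # a1 = tanh(z1)
    z2 = a1.dot(W2) + b2         # z2 = a1 W2 + b2
    exp_scores = np.exp(z2)      # softmax, computed row by row
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    return a1, probs             # probs is y_hat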

The loss function is the cross-entropy. Since the last layer of the network is a softmax, the cross-entropy loss can be written as:

$$
loss = -\sum_{k=1}^{K} y_k \ln \hat{y}_k
$$
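
With integer class labels, this amounts to picking out the log-probability of the correct class for each example and averaging over the training set, which is exactly what calculate_loss does in the code below. A minimal sketch (the function name cross_entropy_loss is just for illustration):

def cross_entropy_loss(probs, y):
    # probs: (N, K) softmax outputs; y: (N,) integer class labels
    correct_logprobs = -np.log(probs[np.arange(len(y)), y])
    return np.sum(correct_logprobs) / len(y)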

The optimizer is full-batch SGD (again, see 深度学习中常用的优化算法(SGD,Nesterov,Adagrad,RMSProp,Adam)总结). You would rarely use it this way in practice, but the dataset here is small, so it is perfectly adequate.
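
Concretely, every pass computes the gradients on the whole training set and takes one step with a fixed learning rate $\epsilon$:

$$
W \leftarrow W - \epsilon \frac{\partial L}{\partial W}, \qquad
b \leftarrow b - \epsilon \frac{\partial L}{\partial b}
$$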

Backpropagation is just the chain rule applied step by step (see CS231n Convolutional Neural Networks for Visual Recognition). The only slightly tricky part is the derivative of the softmax function; the detailed derivation can be found in Softmax函数与交叉熵. The backpropagation expressions are:

$$
\begin{aligned}
\delta_3 &= \frac{\partial L}{\partial z_2} = \hat{y} - y \\
\delta_2 &= \frac{\partial L}{\partial z_1} = (1 - \tanh^2 z_1) \circ \delta_3 W_2^T \\
\frac{\partial L}{\partial W_2} &= a_1^T \delta_3 \\
\frac{\partial L}{\partial b_2} &= \delta_3 \\
\frac{\partial L}{\partial W_1} &= x^T \delta_2 \\
\frac{\partial L}{\partial b_1} &= \delta_2
\end{aligned}
$$
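
These expressions map one-to-one onto the backpropagation block inside build_model below. A minimal sketch (the function name backward is just for illustration; a1 and probs are the quantities produced by the forward pass):

def backward(x, y, a1, probs, W2):
    delta3 = probs.copy()
    delta3[np.arange(len(y)), y] -= 1                   # delta3 = y_hat - y
    dW2 = a1.T.dot(delta3)                              # dL/dW2 = a1^T delta3
    db2 = np.sum(delta3, axis=0, keepdims=True)         # dL/db2, summed over examples
    delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))   # (1 - tanh^2 z1) o (delta3 W2^T)
    dW1 = x.T.dot(delta2)                               # dL/dW1 = x^T delta2
    db1 = np.sum(delta2, axis=0)                        # dL/db1
    return dW1, db1, dW2, db2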

3. Data and Classification Results

The first figure below shows the data generated with sklearn; the second shows the result of classifying that data with our NN.
[Figure 2: the two-moons dataset generated with sklearn]
[Figure 3: decision boundary learned by the network]

4. Code

The IPython code is listed below; you can also refer to nn-from-scratch directly. Even though this is only a simple NN model, it already gives you a taste of hyperparameter tuning: using a smaller learning rate, adding momentum to SGD, increasing the number of hidden units (nn_hdim), or adding more layers can all reduce the loss. A sketch of a momentum update is given after the code.

# Package imports
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib


# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)


# Generate a dataset and plot it
np.random.seed(0)
# Generate 200 points
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)


num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality

# Gradient descent parameters (I picked these by hand)
epsilon = 0.01 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength


# Helper function to plot a decision boundary.
def plot_decision_boundary(pred_func):
    # Set min and max values and give it some padding
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
    Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and training examples
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)


# Helper function to evaluate the total loss on the dataset
def calculate_loss(model):
    W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
    # Forward propagation to calculate our predictions
    z1 = X.dot(W1) + b1
    a1 = np.tanh(z1)
    z2 = a1.dot(W2) + b2
    exp_scores = np.exp(z2)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    # Calculating the loss
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs)
    # Add regularization term to loss (optional)
    data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
    return 1./num_examples * data_loss


# Helper function to predict an output (0 or 1)
def predict(model, x):
    W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
    # Forward propagation
    z1 = x.dot(W1) + b1
    a1 = np.tanh(z1)
    z2 = a1.dot(W2) + b2
    exp_scores = np.exp(z2)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    return np.argmax(probs, axis=1)


# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
    
    # Initialize the parameters to random values. We need to learn these.
    np.random.seed(0)
    W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
    b1 = np.zeros((1, nn_hdim))
    W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
    b2 = np.zeros((1, nn_output_dim))

    # This is what we return at the end
    model = {}
    
    # Gradient descent. For each batch...
    for i in range(0, num_passes):

        # Forward propagation
        z1 = X.dot(W1) + b1
        a1 = np.tanh(z1)
        z2 = a1.dot(W2) + b2
        exp_scores = np.exp(z2)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

        # Backpropagation
        delta3 = probs
        delta3[range(num_examples), y] -= 1
        dW2 = (a1.T).dot(delta3)
        db2 = np.sum(delta3, axis=0, keepdims=True)
        delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
        dW1 = np.dot(X.T, delta2)
        db1 = np.sum(delta2, axis=0)

        # Add regularization terms (b1 and b2 don't have regularization terms)
        dW2 += reg_lambda * W2
        dW1 += reg_lambda * W1

        # Gradient descent parameter update
        W1 += -epsilon * dW1
        b1 += -epsilon * db1
        W2 += -epsilon * dW2
        b2 += -epsilon * db2
        
        # Assign new parameters to the model
        model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
        
        # Optionally print the loss.
        # This is expensive because it uses the whole dataset, so we don't want to do it too often.
        if print_loss and i % 1000 == 0:
          print("Loss after iteration %i: %f" %(i, calculate_loss(model)))
    
    return model


# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)

# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
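
As mentioned above, one simple experiment is to add momentum to SGD. A minimal sketch of how the parameter-update step in build_model could be modified (the velocity variables vW1, vb1, vW2, vb2 and the coefficient mu are not in the original code; they are assumptions for illustration):

# Initialize velocities once, before the training loop
mu = 0.9  # momentum coefficient (assumed value)
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)

# Inside the loop, replace the plain SGD parameter update with:
vW1 = mu * vW1 - epsilon * dW1
W1 += vW1
vb1 = mu * vb1 - epsilon * db1
b1 += vb1
vW2 = mu * vW2 - epsilon * dW2
W2 += vW2
vb2 = mu * vb2 - epsilon * db2
b2 += vb2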
