Watermelon Book, Chapter 5 Summary

  • Forward propagation and backpropagation
  • Activation functions
  • BP neural network implementation
  • References

Forward Propagation and Backpropagation

A neural network consists of an input layer, one or more hidden layers, and an output layer.
(figure: network structure, omitted)
Forward propagation: first initialize the weights randomly, then compute layer by layer from the input layer forward to obtain the output-layer result. Backpropagation: from the error between the network's output and the expected output, compute the partial derivative of the error with respect to each weight. Finally, adjust the weights by gradient descent. Together these steps make up one forward and backward pass.
The main flow is shown in the figure below.
(figure: training flow, omitted)
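The forward pass described above is just a chain of matrix products, each followed by the activation function. A minimal sketch (the layer sizes and fixed weights here are made up for illustration, not from the post):

```python
import numpy as np

def forward(x, weights, activation=np.tanh):
    """Propagate input x through each weight matrix in turn."""
    a = x
    for W in weights:
        a = activation(a @ W)  # linear combination, then nonlinearity
    return a

# toy 2-3-1 network with fixed weights so the result is reproducible
W1 = np.full((2, 3), 0.5)
W2 = np.full((3, 1), 0.5)
out = forward(np.array([1.0, -1.0]), [W1, W2])
# here every hidden pre-activation is 0.5 - 0.5 = 0, and tanh(0) = 0,
# so the network outputs exactly 0 for this input
```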

Activation Functions

Commonly used activation functions include:
1. The sigmoid function: σ(x) = 1 / (1 + e^(-x))
2. The hyperbolic tangent: tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
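Both derivatives needed by backpropagation can be written in terms of the function's own output: σ'(x) = σ(x)(1 - σ(x)) and tanh'(x) = 1 - tanh²(x), which is exactly what the `logistic_derivative` and `tanh_derivative` helpers below compute. A quick standalone numerical check of these identities:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-3, 3, 7)
h = 1e-6  # finite-difference step

# analytic derivatives, expressed via the function values themselves
d_sigmoid = sigmoid(x) * (1 - sigmoid(x))
d_tanh = 1 - np.tanh(x) ** 2

# central differences agree with the analytic forms to high precision
assert np.allclose((sigmoid(x + h) - sigmoid(x - h)) / (2 * h), d_sigmoid, atol=1e-6)
assert np.allclose((np.tanh(x + h) - np.tanh(x - h)) / (2 * h), d_tanh, atol=1e-6)
```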

BP Neural Network Implementation

This is just a two-layer neural network that learns XOR; the code is adapted from
https://blog.csdn.net/zhouzx2010/article/details/71126800

import numpy as np

def tanh(x):
    return np.tanh(x)

def tanh_derivative(x):
    return 1.0 - np.tanh(x) * np.tanh(x)

def logistic(x):
    return 1 / (1 + np.exp(-x))

def logistic_derivative(x):
    return logistic(x) * (1 - logistic(x))

class NeuralNetwork:
    def __init__(self, layers, activation='tanh'):
        if activation == 'logistic':
            self.activation = logistic
            self.activation_deriv = logistic_derivative
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_deriv = tanh_derivative
        self.weights = []
        for i in range(1, len(layers) - 1):
            # [0,1) * 2 - 1 => [-1,1) => * 0.25 => [-0.25,0.25)
            self.weights.append((2 * np.random.random((layers[i - 1] + 1, layers[i] + 1)) - 1) * 0.25)
        self.weights.append((2 * np.random.random((layers[i] + 1, layers[i + 1])) - 1) * 0.25)

    def fit(self, X, y, learning_rate=0.2, epochs=10000):
        X = np.atleast_2d(X)
        X = np.column_stack((X, np.ones(len(X))))  # append a bias column
        y = np.array(y)
        for k in range(epochs):
            i = np.random.randint(X.shape[0])  # pick one training sample at random
            a = [X[i]]
            # forward pass
            for l in range(len(self.weights)):
                a.append(self.activation(np.dot(a[l], self.weights[l])))
            # backpropagation
            error = y[i] - a[-1]
            deltas = [error * self.activation_deriv(a[-1])]
            layer_num = len(a) - 2
            for j in range(layer_num, 0, -1):  # from the second-to-last layer backwards
                deltas.append(deltas[-1].dot(self.weights[j].T) * self.activation_deriv(a[j]))
            deltas.reverse()
            # gradient-descent weight update
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        x = np.array(x)
        temp = np.ones(x.shape[0] + 1)
        temp[0:-1] = x  # append the bias term
        a = temp
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a
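As a self-contained sanity check of the same delta rule, here is a condensed batch version of a tanh network trained on XOR. The hidden size, seed, and use of batch (rather than per-sample) updates are my own choices for the sketch; the learning rate and epoch count match the blog's defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs with a bias column appended, as fit() does above
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.uniform(-0.25, 0.25, (3, 4))  # input(+bias) -> hidden
W2 = rng.uniform(-0.25, 0.25, (4, 1))  # hidden -> output
lr = 0.2

def loss():
    return float(np.mean((y - np.tanh(np.tanh(X @ W1) @ W2)) ** 2))

before = loss()
for _ in range(10000):
    h = np.tanh(X @ W1)                 # forward pass
    out = np.tanh(h @ W2)
    d_out = (y - out) * (1 - out ** 2)  # delta at the output layer
    d_h = d_out @ W2.T * (1 - h ** 2)   # delta propagated back through W2
    W2 += lr * h.T @ d_out              # same update rule as fit() above
    W1 += lr * X.T @ d_h
after = loss()
```

After training, the mean squared error should have dropped well below its initial value, with outputs for the four XOR patterns approaching 0, 1, 1, 0.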

References

[1] Zhou Zhihua (周志华). Machine Learning (机器学习).
[2] https://blog.csdn.net/zhouzx2010/article/details/71126800
