Demo 3: Gradient Descent
Source: Bilibili, 刘二大人
Gradient Descent
# Task: implement gradient descent (average loss over all training data) and stochastic gradient descent (loss of a single sample)
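Before the code, it helps to write the two objectives down explicitly (notation mine, matching the functions below). The model is a one-parameter linear model; cost averages the squared error over all N samples, while loss is the squared error of one sample:

\hat{y} = x \cdot w, \qquad \operatorname{cost}(w) = \frac{1}{N}\sum_{n=1}^{N} (x_n w - y_n)^2, \qquad \operatorname{loss}_n(w) = (x_n w - y_n)^2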
# Gradient descent (uses the average cost over the whole dataset)
import matplotlib.pyplot as plt

# Training data
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]

# Initial weight
w = 1.0

# Linear model: y_hat = x * w
def forward(x):
    return x * w

# Cost: mean squared error over all training data
def cost(xs, ys):
    total = 0
    for x_, y_ in zip(xs, ys):
        y_pred = forward(x_)
        total += (y_pred - y_) ** 2
    return total / len(xs)

# Gradient of the cost with respect to w, averaged over all samples
def gradient(xs, ys):
    grad = 0
    for x_, y_ in zip(xs, ys):
        grad += 2 * x_ * (x_ * w - y_)
    return grad / len(xs)

# Record epoch and cost so we can plot them afterwards
epoch_list = []
cost_list = []

print('predict (before training)', 4, forward(4))
# Train for 100 epochs, logging progress each epoch
for epoch in range(100):
    cost_val = cost(x, y)
    grad_val = gradient(x, y)
    w -= 0.01 * grad_val  # learning rate 0.01
    print("epoch:", epoch, "w:", w, "loss:", cost_val)
    epoch_list.append(epoch)
    cost_list.append(cost_val)
print('predict (after training)', 4, forward(4))

plt.plot(epoch_list, cost_list)
plt.ylabel('cost')
plt.xlabel('epoch')
plt.show()
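As a quick sanity check (my addition, not part of the course code), the analytic gradient 2·x·(x·w − y) used in gradient() can be compared against a central finite-difference estimate of d(cost)/dw; numeric_gradient below is a hypothetical helper name:

# Sanity check: analytic gradient vs. numerical estimate of d(cost)/dw
def numeric_gradient(xs, ys, w, eps=1e-6):
    def cost_at(wv):
        return sum((x_ * wv - y_) ** 2 for x_, y_ in zip(xs, ys)) / len(xs)
    # Central difference approximation
    return (cost_at(w + eps) - cost_at(w - eps)) / (2 * eps)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
analytic = sum(2 * x_ * (x_ * 1.0 - y_) for x_, y_ in zip(xs, ys)) / len(xs)
print(analytic, numeric_gradient(xs, ys, 1.0))  # both should be about -9.3333

The two values should agree to several decimal places, which confirms the hand-derived gradient used in the script.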
Stochastic Gradient Descent:
Stochastic gradient descent has been shown to work well for neural networks. It is less efficient (higher time complexity, since the weight is updated once per sample), but its learning performance is better.
The main differences between stochastic gradient descent and batch gradient descent:
1. The loss function changes from cost() to loss(): cost() computes the average loss over all training data, while loss() computes the loss of a single training sample. In the code this removes the two for loops (one in cost(), one in gradient()).
2. The gradient function gradient() changes from computing the gradient over all training data to computing the gradient for a single sample.
3. "Stochastic" here means the weight is updated once per training sample. In this script the weight is updated 100 (epochs) × 3 (samples) = 300 times, whereas in batch gradient descent it is updated only 100 times, once per epoch. The two update rules are written out below.
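To make point 3 concrete, here are the two update rules for this model (α is the learning rate, 0.01 in the code); the notation is mine, not from the course:

w \leftarrow w - \alpha \cdot \frac{1}{N}\sum_{n=1}^{N} 2\,x_n (x_n w - y_n) \qquad \text{(batch gradient descent, once per epoch)}

w \leftarrow w - \alpha \cdot 2\,x_n (x_n w - y_n) \qquad \text{(stochastic gradient descent, once per sample)}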
# Stochastic gradient descent implementation
import matplotlib.pyplot as plt

# Training data
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]

# Initial weight
w = 1.0

# Linear model: y_hat = x * w
def forward(x):
    return x * w

# Loss of a single training sample
def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

# Gradient of the single-sample loss with respect to w
def gradient(x, y):
    return 2 * x * (x * w - y)

epoch_list = []
loss_list = []

print('predict (before training)', 4, forward(4))
for epoch in range(100):
    for x_, y_ in zip(x, y):
        grad = gradient(x_, y_)
        w -= 0.01 * grad  # update the weight after every single sample
        print("\tgrad:", x_, y_, grad)
        loss_val = loss(x_, y_)
    # loss_val here is the loss of the last sample seen in this epoch
    print('progress:', epoch, "w =", w, "loss =", loss_val)
    epoch_list.append(epoch)
    loss_list.append(loss_val)
print('predict (after training)', 4, forward(4))

plt.figure(figsize=(10, 10))
plt.plot(epoch_list, loss_list)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
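With these settings both versions converge to w ≈ 2.0, so the post-training prediction for x = 4 approaches 8.0. Note that the SGD curve plots the loss of the last sample seen in each epoch rather than the averaged cost, so the two plots do not measure exactly the same quantity.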