Gradient Descent Algorithm

Gradient descent is a greedy algorithm for finding a local optimum: it iteratively moves w toward the value at which the MSE cost is minimized.

[Figures 1-2: derivation of the MSE cost, its gradient with respect to w, and the gradient descent update rule]
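In formulas, matching the `cost` and `gradient` functions in the (commented-out) batch gradient descent code below:

$$\hat{y} = x \cdot w, \qquad \text{cost}(w) = \frac{1}{N}\sum_{n=1}^{N}(x_n w - y_n)^2$$

$$\frac{\partial\,\text{cost}}{\partial w} = \frac{1}{N}\sum_{n=1}^{N} 2x_n(x_n w - y_n), \qquad w \leftarrow w - \alpha\,\frac{\partial\,\text{cost}}{\partial w}$$

where $\alpha$ is the learning rate (0.01 in the code).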

Intuitively: if the slope is positive, the cost decreases as w decreases, so the update subtracts from w; if the slope is negative, the cost decreases as w increases, so the update adds to w. Either way, $w - \alpha\,\partial\text{cost}/\partial w$ moves downhill.

Updated approach: stochastic gradient descent (SGD)

[Figure 3: the stochastic gradient descent update rule]

SGD updates w once for every individual sample, rather than once per pass over the whole dataset.
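In formulas, matching the `loss` and `gradient` functions in the SGD code below:

$$\text{loss}(x, y) = (\hat{y} - y)^2 = (xw - y)^2, \qquad \frac{\partial\,\text{loss}}{\partial w} = 2x(xw - y)$$

so each sample triggers the update $w \leftarrow w - \alpha \cdot 2x(xw - y)$.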

Since there are three training samples and the loop runs for 100 epochs, and SGD performs one update per sample, w is updated 3 × 100 = 300 times in total. The original (batch) gradient descent instead averages the three samples' gradients into a single gradient per epoch, so over the same 100 epochs it updates w only 100 times (see the sketch below).
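A minimal sketch of that bookkeeping (the names `sgd_updates` and `batch_updates` are illustrative, not part of the program below):

epochs = 100
n_samples = 3  # len(x_data)

sgd_updates = epochs * n_samples   # SGD: one update per sample per epoch
batch_updates = epochs             # batch GD: one averaged update per epoch

print(sgd_updates, batch_updates)  # 300 100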

# -*- coding: utf-8 -*-
# @Time    : 2023-07-10 14:46
# @Author  : yuer
# @FileName: exercise03.py
# @Software: PyCharm


import matplotlib.pyplot as plt


# Gradient descent
# learning rate alpha is set to 0.01
# training data (the trained model will be used to predict x = 4)
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

# initial weight
w = 1.0

# linear model
def forward(x):
    return x * w

# # Compute the MSE cost over the whole training set
# def cost(xs, ys):
#     cost = 0
#     for x, y in zip(xs, ys):
#         y_pred = forward(x)
#         cost += (y_pred - y) ** 2
#     return cost / len(xs)
#
# # Compute the gradient of cost(w) with respect to w
# def gradient(xs, ys):
#     grad = 0
#     for x, y in zip(xs, ys):
#         grad += 2 * x * (x * w - y)
#     return grad / len(xs)
#
# epoch_list = []
# cost_list = []
# print('predict (before training)', 4, forward(4))
#
# for epoch in range(100):
#     cost_val = cost(x_data, y_data)
#     grad_val = gradient(x_data, y_data)
#     w -= 0.01 * grad_val  # one averaged update per epoch
#     print('epoch', epoch, 'w=', w, 'loss=', cost_val)
#     epoch_list.append(epoch)
#     cost_list.append(cost_val)
#
# print('predict (after training)', 4, forward(4))
# plt.plot(epoch_list, cost_list)
# plt.ylabel('cost')
# plt.xlabel('epoch')
# plt.show()


# Stochastic gradient descent (SGD)

# per-sample squared-error loss
def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

# gradient of the per-sample loss with respect to w
def gradient(x, y):
    return 2 * x * (x * w - y)

epoch_list = []
loss_list = []
print('predict (before training)', 4, forward(4))
for epoch in range(100):
    for x, y in zip(x_data, y_data):
        grad = gradient(x, y)
        w = w - 0.01 * grad  # one update per sample
        print('\tgrad:', x, y, grad)
        l = loss(x, y)  # loss on the current sample, after the update
    # record the loss on the last sample of this epoch
    print('progress:', epoch, 'w=', w, 'loss=', l)
    epoch_list.append(epoch)
    loss_list.append(l)

print('predict (after training)', 4, forward(4))
plt.plot(epoch_list, loss_list)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
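Since the training data satisfy y = 2x exactly, w should converge to roughly 2.0 over the 100 epochs, the prediction for x = 4 should approach 8.0, and the plotted loss curve should fall toward zero.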
