18 Gradient Descent

First, fix the form of the function to fit: y = a*x1 + b*x2 + c.
Then define the loss function and take its derivative with respect to the parameters. The descent direction can be computed either as an average over a batch of samples or from a single sample; a single sample just makes the direction noisier. One pass over a batch, i.e. one parameter update, is one optimization step.
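Spelling the step above out (a sketch of the standard derivation, using the mean squared error as the loss and η for the learning rate `rate`; the signs match the code below):

```latex
J(a,b,c) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h(x^{(i)}) - y^{(i)}\bigr)^2,
\qquad h(x) = a\,x_1 + b\,x_2 + c
```

```latex
a \leftarrow a + \frac{\eta}{m}\sum_{i=1}^{m}\bigl(y^{(i)} - h(x^{(i)})\bigr)x_1^{(i)},\quad
b \leftarrow b + \frac{\eta}{m}\sum_{i=1}^{m}\bigl(y^{(i)} - h(x^{(i)})\bigr)x_2^{(i)},\quad
c \leftarrow c + \frac{\eta}{m}\sum_{i=1}^{m}\bigl(y^{(i)} - h(x^{(i)})\bigr)
```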

import numpy as np

# Ground truth used to generate the data: y = 2*x1 + x2 + 3
rate = 0.001  # learning rate
x_train = np.array([[1, 2], [2, 1], [2, 3], [3, 5], [1, 3],
                    [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]])
y_train = np.array([7, 8, 10, 14, 8, 13, 20, 16, 28, 26])
x_test = np.array([[1, 4], [2, 2], [2, 5], [5, 3], [1, 5], [4, 1]])

a, b, c = 0, 0, 0

def h(x):
    '''Hypothesis: h(x) = a*x1 + b*x2 + c.'''
    return x[:, 0] * a + x[:, 1] * b + c

m = x_train.shape[0]  # number of training samples

for i in range(100000):
    # Compute the residual once per step so that a, b, c are updated
    # simultaneously; updating them one after another would change
    # h(x_train) mid-step and give a slightly different descent direction.
    err = y_train - h(x_train)
    a = a + rate * np.sum(err * x_train[:, 0]) / m
    b = b + rate * np.sum(err * x_train[:, 1]) / m
    c = c + rate * np.sum(err) / m

print(a)
print(b)
print(c)
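Note that `x_test` is defined above but never used. As a sketch, the same loop can be written in vectorized form and the learned parameters applied to the test set (same data and hyperparameters as above; since `y_train` follows y = 2*x1 + x2 + 3 exactly, the fit should recover those coefficients):

```python
import numpy as np

x_train = np.array([[1, 2], [2, 1], [2, 3], [3, 5], [1, 3],
                    [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]], dtype=float)
y_train = np.array([7, 8, 10, 14, 8, 13, 20, 16, 28, 26], dtype=float)
x_test = np.array([[1, 4], [2, 2], [2, 5], [5, 3], [1, 5], [4, 1]], dtype=float)

rate, m = 0.001, len(x_train)
w = np.zeros(2)   # weights [a, b]
c = 0.0           # bias

for _ in range(100000):
    err = y_train - (x_train @ w + c)   # residual for the whole batch
    w += rate * (x_train.T @ err) / m   # simultaneous update of a and b
    c += rate * err.sum() / m

pred = x_test @ w + c
print(w, c)   # should approach [2, 1] and 3
print(pred)   # should approach [9, 9, 12, 16, 10, 12]
```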

