A quick NumPy implementation of univariate linear regression (fitting a cubic function)

import numpy as np
import matplotlib.pyplot as plt

learning_rate=1e-1    # tuned over several runs
epochs=1000
# input_features=1, output_features=1 (scalar x and scalar y)
# 1000 samples, processed as one full batch each epoch

w=1.0    # scalar weight (every sample shares the same w and b)
b=1.0    # scalar bias
x=np.random.randn(1000)
y=x**3   # elementwise cube; no list comprehension needed
print(x.shape,y.shape) 
plt.scatter(x,y)
plt.show()
(1000,) (1000,)

[Figure 1: scatter plot of the training data (x, y = x³)]

$$
\mathrm{loss}=\frac{1}{n}\sum_{i=1}^{n}\left(wx_i+b-y_i\right)^2
$$

$$
\frac{\partial\,\mathrm{loss}}{\partial w}=\frac{2}{n}\sum_{i=1}^{n}x_i\left(wx_i+b-y_i\right),\qquad
\frac{\partial\,\mathrm{loss}}{\partial b}=\frac{2}{n}\sum_{i=1}^{n}\left(wx_i+b-y_i\right)
$$
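As a sanity check on these derivatives, here is a minimal numerical gradient check (a sketch only; the helper names and probe values below are illustrative and not part of the training code). A central difference around an arbitrary (w, b) should agree with the analytic gradients to several decimal places:

def mse_loss_at(w,b,x,y):
    # MSE at a given (w, b), same loss as the formula above
    return np.mean((w*x+b-y)**2)

def analytic_grads(w,b,x,y):
    # analytic dloss/dw and dloss/db from the formulas above
    resid=w*x+b-y
    return 2*np.mean(x*resid),2*np.mean(resid)

w0,b0,eps=0.5,-0.2,1e-6          # arbitrary probe point and step
gw,gb=analytic_grads(w0,b0,x,y)
gw_num=(mse_loss_at(w0+eps,b0,x,y)-mse_loss_at(w0-eps,b0,x,y))/(2*eps)
gb_num=(mse_loss_at(w0,b0+eps,x,y)-mse_loss_at(w0,b0-eps,x,y))/(2*eps)
print(gw,gw_num)   # analytic vs numerical dloss/dw, should match closely
print(gb,gb_num)   # analytic vs numerical dloss/db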

def getloss(pred,label):
    """
    pred: prediction array of shape (n,)
    label: label array of shape (n,)
    """
    # mean squared error (MSE), matching the gradient formulas above
    n=len(pred)
    loss=np.sum((pred-label)**2)/n
    return loss
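A quick check of getloss on a toy pair of arrays (values chosen only for illustration):

pred_toy=np.array([1.0,2.0,3.0])
label_toy=np.array([1.0,1.0,1.0])
print(getloss(pred_toy,label_toy))   # (0**2+1**2+2**2)/3 = 5/3 ≈ 1.667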

def gradient_descent(init_weight,init_bias,x_train,y_train,epochs,lr):
    w=init_weight
    b=init_bias
    n=len(x_train)
    loss_list=[]
    for epoch in range(epochs):
        # forward pass
        pred=w*x_train+b
        loss=getloss(pred,y_train)
        loss_list.append(loss)
        # backward pass: MSE gradients w.r.t. w and b
        grad_w=np.sum((pred-y_train)*(2*x_train)/n)
        grad_b=np.sum((pred-y_train)*(2/n))
        # gradient step, using the lr argument (not the global learning_rate)
        w=w-lr*grad_w
        b=b-lr*grad_b
        if (epoch+1)%50==0:
            print("Epoch {}/{}:".format(epoch+1,epochs))
            print("Loss:{}".format(loss))
    return w,b,loss_list
    
w,b,loss=gradient_descent(w,b,x,y,epochs,learning_rate)
loss=np.array(loss)
Epochs=np.arange(1,epochs+1)
plot_x=np.linspace(-3,3,1000)
prediction=w*plot_x+b
print(plot_x.shape,prediction.shape)
plt.plot(plot_x,prediction,c='r')
plt.scatter(x,y)
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
plt.plot(Epochs,loss)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.show()
Epoch 50/1000:
Loss:0.017706093923100227
Epoch 100/1000:
Loss:0.016060874992968197
Epoch 150/1000:
Loss:0.014568561314171443
Epoch 200/1000:
Loss:0.013214938732651933
Epoch 250/1000:
Loss:0.011987114760282535
Epoch 300/1000:
Loss:0.010873395653728185
Epoch 350/1000:
Loss:0.009863174928349911
Epoch 400/1000:
Loss:0.008946832243113071
Epoch 450/1000:
Loss:0.008115641691495604
Epoch 500/1000:
Loss:0.0073616886232058506
Epoch 550/1000:
Loss:0.0066777942029764914
Epoch 600/1000:
Loss:0.0060574469865672075
Epoch 650/1000:
Loss:0.005494740861105241
Epoch 700/1000:
Loss:0.004984318757650579
Epoch 750/1000:
Loss:0.004521321598973595
Epoch 800/1000:
Loss:0.0041013419955055926
Epoch 850/1000:
Loss:0.003720382247745059
Epoch 900/1000:
Loss:0.0033748162545040243
Epoch 950/1000:
Loss:0.0030613549636569408
Epoch 1000/1000:
Loss:0.002777015035862922
(1000,) (1000,)

[Figure 2: fitted regression line (red) over the (x, y) scatter]

[Figure 3: training loss vs. epoch]
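As a final cross-check (a sketch, reusing x, y, w, b from above): for x drawn from a standard normal, the least-squares line through y = x³ has slope E[x⁴]/E[x²] = 3 and intercept 0, so gradient descent should end up near w ≈ 3, b ≈ 0. np.polyfit gives the closed-form fit directly:

coeffs=np.polyfit(x,y,1)            # degree-1 fit, returns [slope, intercept]
print("closed-form fit:",coeffs)    # expect roughly [3, 0] for x ~ N(0,1)
print("gradient descent:",w,b)      # should be close after 1000 epochs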
