Python Learning Notes 2: Dynamically Visualizing Gradient Descent Parameters with matplotlib


Main reference blogs:
Drawing multiple dynamically refreshing charts in real time with matplotlib
https://blog.csdn.net/u013950379/article/details/87936999
[python] Dynamic display with matplotlib
https://blog.csdn.net/zyxhangiian123456789/article/details/89159530
Gradient Descent Made Simple, with an Implementation
https://www.jianshu.com/p/c7e642877b0e
An In-Depth Look at the Most Popular Optimization Algorithm: Gradient Descent
https://www.cnblogs.com/shixiangwan/p/7532858.html
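
The script below fits a straight line y = theta_0 + theta_1 * x to 20 points by batch gradient descent, redrawing the fitted line after every parameter update so you can watch theta converge. For reference (the derivation is not spelled out in this post, but these standard least-squares formulas are exactly what the code computes):

J(\theta) = \frac{1}{2m} (X\theta - y)^T (X\theta - y)

\nabla J(\theta) = \frac{1}{m} X^T (X\theta - y)

\theta \leftarrow \theta - \alpha \, \nabla J(\theta)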

import numpy as np
import matplotlib.pyplot as plt

# Size of the points dataset.
m = 20

# Points x-coordinate and dummy value (x0, x1).
X0 = np.ones((m, 1))
X1 = np.arange(1, m+1).reshape(m, 1)
X = np.hstack((X0, X1))

# Points y-coordinate
y = np.array([
    3, 4, 5, 5, 2, 4, 7, 8, 11, 8, 12,
    11, 13, 13, 16, 17, 18, 17, 19, 21
]).reshape(m, 1)

# The Learning Rate alpha.
alpha = 0.01314

def error_function(theta, X, y):
    '''Cost function J(theta) = 1/(2m) * ||X.theta - y||^2.'''
    diff = np.dot(X, theta) - y
    # Parenthesize 1./(2*m): writing 1./2*m would evaluate to m/2.
    return (1. / (2 * m)) * np.dot(np.transpose(diff), diff)

def gradient_function(theta, X, y):
    '''Gradient of the function J definition.'''
    diff = np.dot(X, theta) - y
    return (1./m) * np.dot(np.transpose(X), diff)

def gradient_descent(X, y, alpha):
    '''Perform batch gradient descent, redrawing the fit after each step.'''
    theta = np.array([1, 1]).reshape(2, 1)  # initial guess for theta
    plt.ion()  # interactive mode: plt.pause() redraws without blocking
    gradient = gradient_function(theta, X, y)
    num = 0
    while not np.all(np.absolute(gradient) <= 1e-5):
        num += 1
        theta = theta - alpha * gradient
        gradient = gradient_function(theta, X, y)
        plt.clf()  # clear the figure before drawing the current fit
        plt.suptitle("iter %d: theta = [%.4f, %.4f]"
                     % (num, theta[0, 0], theta[1, 0]), fontsize=9)
        plt.plot(X1, y, "r*")
        # Draw the fitted line y = theta_0 + theta_1 * x across the x range.
        x_line = np.array([0, 20])
        plt.plot(x_line, theta[0] + theta[1] * x_line)
        plt.ylim((0, 22))  # y reaches 21, so a limit of 20 would clip a point
        plt.pause(0.4)
    plt.ioff()  # leave interactive mode so the final show() blocks
    plt.show()
    return theta

optimal = gradient_descent(X, y, alpha)
print('optimal:', optimal)
print('error function:', error_function(optimal, X, y)[0,0])
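
As a quick sanity check (not part of the original post), the converged theta can be compared against the closed-form least-squares solution. A minimal sketch, assuming it runs in the same session so X and y are already defined:

# Closed-form least-squares solution via numpy, for comparison
# with the gradient descent result above.
theta_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
print('closed-form theta:', theta_exact.ravel())

The ion()/clf()/pause() pattern used above is the simplest way to animate with matplotlib; matplotlib.animation.FuncAnimation is an alternative that performs one update per frame. A sketch reusing the functions defined above (the frame count and interval are arbitrary choices, not values from the original post):

from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.plot(X1, y, "r*")
line, = ax.plot([], [])  # the fitted line, updated each frame
ax.set_xlim(0, 20)
ax.set_ylim(0, 22)
theta_anim = np.array([1., 1.]).reshape(2, 1)

def step(frame):
    global theta_anim
    # One gradient descent update per animation frame.
    theta_anim = theta_anim - alpha * gradient_function(theta_anim, X, y)
    x_line = np.array([0, 20])
    line.set_data(x_line, theta_anim[0] + theta_anim[1] * x_line)
    ax.set_title("iter %d" % (frame + 1), fontsize=9)
    return line,

ani = FuncAnimation(fig, step, frames=300, interval=40, blit=False)
plt.show()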
