Tensorflow Learning Path: Linear Regression in Practice

Linear Regression in Practice

          • Implementation Steps
          • Step 1: Compute Loss
          • Step 2: Compute Gradient and Update
          • Step 3: Set w = w′ and Loop
          • Dataset
          • Full Code
          • Results

Linear equation: y = w * x + b

Implementation Steps

1: Compute the loss from the randomly initialized w and b and the data points (x, y).
2: Compute the gradient of the loss with respect to the current w and b.
3: Update the parameters: assign the new w′ and b′ back to w and b and loop, eventually arriving at an optimal w′ and b′ as the final parameters of the equation.

Step 1: Compute Loss

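Written out, the loss is the mean squared error (MSE) over the N data points, which is exactly what the function below computes:

loss(w, b) = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - (w x_i + b) \right)^2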

def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # accumulate the squared error of the prediction w * x + b
        totalError += (y - (w * x + b)) ** 2
    # average the loss over all points
    return totalError / float(len(points))
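As a quick sanity check, here is the function on a tiny made-up dataset (the points are invented purely for illustration):

import numpy as np

# two points that lie exactly on y = 2x
points = np.array([[1.0, 2.0], [2.0, 4.0]])
print(compute_error_for_line_given_points(0, 2, points))  # 0.0, perfect fit
print(compute_error_for_line_given_points(0, 1, points))  # (1 + 4) / 2 = 2.5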
Step 2: Compute Gradient and Update

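Taking partial derivatives of the MSE loss with respect to b and w gives the gradients that the code below accumulates:

\frac{\partial loss}{\partial b} = \frac{2}{N} \sum_{i=1}^{N} \left( (w x_i + b) - y_i \right)

\frac{\partial loss}{\partial w} = \frac{2}{N} \sum_{i=1}^{N} x_i \left( (w x_i + b) - y_i \right)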

def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # grad_b = 2(wx+b-y)
        b_gradient += (2 / N) * ((w_current * x + b_current) - y)
        # grad_w = 2(wx+b-y)*x
        w_gradient += (2 / N) * x * ((w_current * x + b_current) - y)
    # update b' and w' once per step, after the full gradient has been accumulated
    new_b = b_current - (learningRate * b_gradient)
    new_w = w_current - (learningRate * w_gradient)
    return [new_b, new_w]
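For reference, the same update can be written without the Python loop by vectorizing over all points with NumPy. This is an equivalent sketch, not part of the original code:

import numpy as np

def step_gradient_vectorized(b_current, w_current, points, learning_rate):
    x = points[:, 0]
    y = points[:, 1]
    error = (w_current * x + b_current) - y    # residuals for every point at once
    b_gradient = 2 * error.mean()              # equals (2/N) * sum(error)
    w_gradient = 2 * (x * error).mean()        # equals (2/N) * sum(x * error)
    return [b_current - learning_rate * b_gradient,
            w_current - learning_rate * w_gradient]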
Step 3: Set w = w′ and Loop

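Each iteration then applies the usual gradient descent update with learning rate lr:

b' = b - lr \cdot \frac{\partial loss}{\partial b}, \qquad w' = w - lr \cdot \frac{\partial loss}{\partial w}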

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    # update b and w for several iterations
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]
Dataset

The dataset (data.csv) contains 100 rows of (x, y) coordinate pairs.

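The original data.csv is not reproduced here. If you need a stand-in to run the code, a similar 100-row file can be generated synthetically (the slope, intercept, and noise level below are arbitrary choices, not the real dataset's values):

import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(20, 80, 100)                   # 100 x-coordinates
y = 1.5 * x + 0.5 + rng.normal(0, 10, 100)     # noisy line y ≈ 1.5x + 0.5
np.savetxt("data.csv", np.column_stack([x, y]), delimiter=",")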

Full Code
import numpy as np

# y = wx + b
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # accumulate the squared error for this point
        totalError += (y - (w * x + b)) ** 2
    # average the loss over all points
    return totalError / float(len(points))


def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # grad_b = 2(wx+b-y)
        b_gradient += (2/N) * ((w_current * x + b_current) - y)
        # grad_w = 2(wx+b-y)*x
        w_gradient += (2/N) * x * ((w_current * x + b_current) - y)
    # update b' and w'
    new_b = b_current - (learningRate * b_gradient)
    new_w = w_current - (learningRate * w_gradient)
    return [new_b, new_w]

def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    # update for several times
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]

def run():
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.0001
    initial_b = 0 # initial y-intercept guess
    initial_w = 0 # initial slope guess
    num_iterations = 1000
    print("Starting gradient descent at b = {0}, w = {1}, error = {2}"
          .format(initial_b, initial_w,
                  compute_error_for_line_given_points(initial_b, initial_w, points))
          )
    print("Running...")
    [b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b = {1}, w = {2}, error = {3}".
          format(num_iterations, b, w,
                 compute_error_for_line_given_points(b, w, points))
          )

if __name__ == '__main__':
    run()
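To visualize the fit, the learned line can be plotted against the data with matplotlib (an add-on to the original script; it assumes the functions above are in scope and data.csv is present):

import numpy as np
import matplotlib.pyplot as plt

points = np.genfromtxt("data.csv", delimiter=",")
b, w = gradient_descent_runner(points, 0, 0, 0.0001, 1000)
x = points[:, 0]
plt.scatter(x, points[:, 1], s=10, label="data")
plt.plot(x, w * x + b, color="red", label="y = wx + b")
plt.legend()
plt.show()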
Results

With b and w both initialized to 0, the error starts at roughly 5565.1. After 1000 training iterations the optimal b is about 0.089 and w is about 1.48, and the error drops to 112.6.

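As a sanity check, the gradient descent result can be compared against NumPy's closed-form least-squares fit; with enough iterations the two should roughly agree:

import numpy as np

points = np.genfromtxt("data.csv", delimiter=",")
w, b = np.polyfit(points[:, 0], points[:, 1], 1)   # degree-1 least-squares fit
print("least-squares w = {0}, b = {1}".format(w, b))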
