Andrew Ng Machine Learning Notes (2) -- Univariate Linear Regression

Based on: Andrew Ng's Machine Learning course.

1. Cost Function

We can measure the accuracy of our hypothesis function by using a cost function. This takes an average of the squared differences between the hypothesis's outputs on the inputs x and the actual outputs y.

| Item | Value |
| --- | --- |
| Hypothesis | $h_{\theta}(x) = \theta_{0} + \theta_{1}x$ |
| Parameters | $\theta_{0}, \theta_{1}$ |
| Cost Function | $J(\theta_{0}, \theta_{1}) = \frac{1}{2m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})^{2}$ |
| Goal | $\min\limits_{\theta_{0}, \theta_{1}} J(\theta_{0}, \theta_{1})$ |

Note: this table describes the two-parameter case.
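To make the cost function concrete, here is a minimal Python sketch (not from the original notes; the toy data and the name `compute_cost` are made up for illustration):

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) = 1/(2m) * sum((h(x) - y)^2)."""
    m = len(y)                            # number of training examples
    predictions = theta0 + theta1 * x     # hypothesis h_theta(x) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy data (hypothetical): y happens to equal x exactly
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(0.0, 1.0, x, y))  # 0.0 -- the line y = x fits perfectly
print(compute_cost(0.0, 0.5, x, y))  # > 0 -- a worse fit has a larger cost
```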

For example, consider the problem of predicting house prices:
[Figure 1]

  • If we fix the parameter $\theta_{0}$ at 0, the goal is to choose the slope $\theta_{1}$ that gives the best possible line through the data, i.e. that brings the cost function as close to zero as possible:
    [Figure 2]
  • When we consider both parameters, the plot of the cost function looks like a bowl:
    [Figure 3]
    That is a 3D plot; we can also draw it as a contour plot:
    [Figure 4]

A contour plot is a graph that contains many contour lines. A contour line of a two-variable function has a constant value at all points on the same line. The contour plot above is an example of such a graph.
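A plot like this can be reproduced with a few lines of matplotlib; the following is a minimal sketch using hypothetical toy data, not the course's figure:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy training data (hypothetical)
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 2.5, 3.5])
m = len(y)

# Evaluate J(theta0, theta1) on a grid of parameter values
theta0_vals = np.linspace(-2, 4, 100)
theta1_vals = np.linspace(-1, 3, 100)
T0, T1 = np.meshgrid(theta0_vals, theta1_vals)
J = np.zeros_like(T0)
for i in range(m):
    J += (T0 + T1 * x[i] - y[i]) ** 2
J /= 2 * m

# Each contour line connects parameter pairs with the same cost
plt.contour(T0, T1, J, levels=30)
plt.xlabel(r"$\theta_0$")
plt.ylabel(r"$\theta_1$")
plt.show()
```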

2. Gradient Descent

The cost function gives us a way of measuring how well the hypothesis fits the data; gradient descent gives us a way of estimating the parameters of the hypothesis that minimize that cost.
[Figure 5]

  • We will know that we have succeeded when our cost function is at the very bottom of the pits in our graph, when its value is the minimum.
  • The way we do this is by taking the derivative (the tangential line to a function) of our cost function. The slope of the tangent is the derivative at that point and it will give us a direction to move towards. We make steps down the cost function in the direction with the steepest descent.
  • The gradient descent algorithm is:
    repeat until convergence {
    $\theta_{j} := \theta_{j} - \alpha\frac{\partial}{\partial\theta_{j}}J(\theta_{0}, \theta_{1})$  (for $j = 0$ and $j = 1$)
    }
  • The two parameters must be updated simultaneously (a small code sketch follows the two lists below):
Correct simultaneous update:
  1. $temp0 := \theta_{0} - \alpha\frac{\partial}{\partial\theta_{0}}J(\theta_{0}, \theta_{1})$
  2. $temp1 := \theta_{1} - \alpha\frac{\partial}{\partial\theta_{1}}J(\theta_{0}, \theta_{1})$
  3. $\theta_{0} := temp0$
  4. $\theta_{1} := temp1$
Incorrect simultaneous update:
  1. $temp0 := \theta_{0} - \alpha\frac{\partial}{\partial\theta_{0}}J(\theta_{0}, \theta_{1})$
  2. $\theta_{0} := temp0$
  3. $temp1 := \theta_{1} - \alpha\frac{\partial}{\partial\theta_{1}}J(\theta_{0}, \theta_{1})$
  4. $\theta_{1} := temp1$
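Here is a minimal Python sketch of the correct simultaneous update; the cost function used (a toy $J = \theta_{0}^{2} + \theta_{1}^{2}$) and all variable names are assumptions made for illustration:

```python
def dJ_dtheta0(theta0, theta1):
    return 2 * theta0              # partial derivative of the toy J w.r.t. theta0

def dJ_dtheta1(theta0, theta1):
    return 2 * theta1              # partial derivative of the toy J w.r.t. theta1

theta0, theta1 = 3.0, -2.0         # arbitrary starting point
alpha = 0.1                        # learning rate

for _ in range(100):               # "repeat until convergence" (fixed number of steps here)
    # Compute both updates from the *old* parameter values first...
    temp0 = theta0 - alpha * dJ_dtheta0(theta0, theta1)
    temp1 = theta1 - alpha * dJ_dtheta1(theta0, theta1)
    # ...then assign them together, which is what "simultaneous" means.
    theta0, theta1 = temp0, temp1

print(theta0, theta1)              # both approach 0, the minimizer of this toy J
```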

3. Gradient Descent For Linear Regression

When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived.

Gradient Descent algorithm:

repeat until convergence {
$\theta_{j} := \theta_{j} - \alpha\frac{\partial}{\partial\theta_{j}}J(\theta_{0}, \theta_{1})$  (for $j = 0$ and $j = 1$)
}

Linear Regression model:

$h_{\theta}(x) = \theta_{0} + \theta_{1}x$

$J(\theta_{0}, \theta_{1}) = \frac{1}{2m}\sum_{i=1}^{m}(h_{\theta}(x^{i})-y^{i})^{2}$
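Substituting the linear hypothesis into $J$ and working out the two partial derivatives gives the concrete update rules (the exact formatting below is mine, but the result is the standard derivation):

```latex
\text{repeat until convergence } \{ \\
\quad \theta_{0} := \theta_{0} - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\bigl(h_{\theta}(x^{i}) - y^{i}\bigr) \\
\quad \theta_{1} := \theta_{1} - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\bigl(h_{\theta}(x^{i}) - y^{i}\bigr)\,x^{i} \\
\}
```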

The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis will become more and more accurate. This is simply gradient descent on the original cost function J. Because this method looks at every example in the entire training set on every step, it is called batch gradient descent.
[Figure 6]
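Putting the pieces together, here is a minimal batch gradient descent sketch for the univariate linear model; the toy data, the learning rate, and the function name `batch_gradient_descent` are all assumptions for illustration:

```python
import numpy as np

def batch_gradient_descent(x, y, alpha=0.1, iterations=1000):
    """Fit h(x) = theta0 + theta1 * x by batch gradient descent."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0                 # initial guess for the hypothesis
    for _ in range(iterations):
        error = theta0 + theta1 * x - y       # h(x^i) - y^i over the whole batch
        # Simultaneous update using every training example
        temp0 = theta0 - alpha * np.sum(error) / m
        temp1 = theta1 - alpha * np.sum(error * x) / m
        theta0, theta1 = temp0, temp1
    return theta0, theta1

# Toy data (hypothetical) generated from y = 1 + 2x
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])
print(batch_gradient_descent(x, y))           # converges toward (1.0, 2.0)
```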
