Gradient Descent Intuition

In this video we explored the scenario where we used one parameter θ1 and plotted its cost function to implement gradient descent. Our formula for a single parameter was:

Repeat until convergence:

θ1 := θ1 − α (d/dθ1) J(θ1)

Regardless of the slope's sign for (d/dθ1) J(θ1), θ1 eventually converges to its minimum value. The following graph shows that when the slope is negative, the value of θ1 increases, and when it is positive, the value of θ1 decreases.
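The update rule above can be sketched in a few lines of code. This is a minimal illustration, assuming the hypothetical quadratic cost J(θ1) = (θ1 − 5)², whose derivative is 2(θ1 − 5); the function name and cost are not from the course, only examples.

```python
def gradient_step(theta1, alpha):
    """One update: theta1 := theta1 - alpha * dJ/dtheta1,
    for the illustrative cost J(theta1) = (theta1 - 5)**2."""
    d_J = 2 * (theta1 - 5)      # derivative of (theta1 - 5)^2
    return theta1 - alpha * d_J

# Negative slope (theta1 left of the minimum): theta1 increases.
print(gradient_step(2.0, 0.1))  # slope = -6, so theta1 moves up to 2.6
# Positive slope (theta1 right of the minimum): theta1 decreases.
print(gradient_step(8.0, 0.1))  # slope = +6, so theta1 moves down to 7.4
```

Either way, subtracting α times the slope pushes θ1 toward the minimum at 5.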



On a side note, we should adjust our parameter α to ensure that the gradient descent algorithm converges in a reasonable time. Failure to converge, or taking too much time to reach the minimum, implies that our step size is wrong.
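The effect of the step size can be seen by running the same update with different values of α. This sketch again assumes the illustrative cost J(θ1) = (θ1 − 5)²; the numbers are only for demonstration.

```python
def run_descent(theta1, alpha, steps):
    """Repeat the update theta1 := theta1 - alpha * 2*(theta1 - 5)
    for the illustrative cost J(theta1) = (theta1 - 5)**2."""
    for _ in range(steps):
        theta1 = theta1 - alpha * 2 * (theta1 - 5)
    return theta1

print(run_descent(0.0, 0.1, 50))  # reasonable alpha: ends very close to the minimum at 5
print(run_descent(0.0, 1.1, 50))  # alpha too large: each step overshoots and diverges
```

With α = 0.1 the distance to the minimum shrinks by a factor of 0.8 per step; with α = 1.1 it grows by a factor of 1.2 per step, so the iterates oscillate away from the minimum instead of settling into it.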



How does gradient descent converge with a fixed step size α?

The intuition behind the convergence is that (d/dθ1) J(θ1) approaches 0 as we approach the bottom of our convex function. At the minimum, the derivative will always be 0, and thus we get:

θ1 := θ1 − α ∗ 0
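In code, the minimum is a fixed point of the update: once the derivative is 0, the step subtracts nothing, no matter what fixed α is. A small sketch, again assuming the illustrative cost J(θ1) = (θ1 − 5)² with its minimum at θ1 = 5:

```python
def gradient_step(theta1, alpha):
    """theta1 := theta1 - alpha * 2*(theta1 - 5),
    for the illustrative cost J(theta1) = (theta1 - 5)**2."""
    return theta1 - alpha * 2 * (theta1 - 5)

theta_min = 5.0
# At the minimum the derivative is 0, so theta1 := theta1 - alpha * 0 = theta1.
print(gradient_step(theta_min, 0.5))   # stays at 5.0 for any alpha
```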

