Andrew Ng's Machine Learning Course: Gradient Descent For Linear Regression

Gradient Descent For Linear Regression

Which of the following are true statements? Select all that apply.

  • To make gradient descent converge, we must slowly decrease α over time.
    False. As the parameters approach a local minimum, the derivative of the cost function becomes smaller, so the gradient descent steps shrink automatically even with a fixed α. There is therefore no need to decrease the learning rate α over time.
  • Gradient descent is guaranteed to find the global minimum for any function J(θ0, θ1).
    False. For a general cost function J(θ0, θ1) with multiple optima, which solution gradient descent converges to depends on the initial values. For example, if the initial θ0, θ1 happen to sit at a local optimum, the derivative there is 0, gradient descent makes no further updates, and the global optimum is never reached.
  • Gradient descent can converge even if α is kept fixed. (But α cannot be too large, or else it may fail to converge.)
    True, for the same reason as the first item: the gradient shrinks as the parameters approach a minimum, so the update steps shrink automatically even with a fixed α (see the sketch after this list).
  • For the specific choice of cost function J(θ0, θ1) used in linear regression, there are no local optima (other than the global optimum).
    True. The squared-error cost function used in linear regression is convex (bowl-shaped), so its only optimum is the global minimum. Gradient descent therefore cannot get stuck at a local optimum, regardless of the initial values of θ0, θ1.
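The following is a minimal sketch illustrating the last two points: gradient descent on the linear regression cost with a fixed α. The toy dataset, the value α = 0.1, and the iteration count are illustrative assumptions, not from the quiz. The printed step sizes shrink on their own as the parameters approach the (global) minimum, even though α never changes.

```python
import numpy as np

# Toy dataset (assumption for illustration): y = 1 + 2x exactly,
# so the squared-error cost J(theta0, theta1) is convex with minimum at (1, 2).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
m = len(x)

theta0, theta1 = 0.0, 0.0  # initial parameters
alpha = 0.1                # FIXED learning rate, never decreased

for it in range(500):
    pred = theta0 + theta1 * x
    err = pred - y
    # Partial derivatives of J(theta0, theta1) = (1/(2m)) * sum((pred - y)^2)
    grad0 = err.sum() / m
    grad1 = (err * x).sum() / m
    # Simultaneous update; the effective step alpha*|grad| shrinks
    # automatically because the gradient shrinks near the minimum.
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1
    if it % 100 == 0:
        step = alpha * np.hypot(grad0, grad1)
        print(f"iter {it:3d}: theta=({theta0:.4f}, {theta1:.4f}), step={step:.6f}")

print(f"final: theta0={theta0:.4f}, theta1={theta1:.4f}")  # converges toward (1, 2)
```

Note that α = 0.1 is small enough here; with a much larger α the updates would overshoot the minimum and diverge, which is exactly the caveat in the third option.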
