Andrew Ng Machine Learning Week 3 Quiz 2 -- Regularization: Answer Explanations

[Figure 1: screenshot of Question 1]
Question 1:
You are training a classification model with logistic regression. Which of the following statements are true? Check all that apply.

  • Introducing regularization to the model always results in equal or
    better performance on examples not in the training set.
  • Adding many new features to the model makes it more likely to overfit
    the training set.
  • Introducing regularization to the model always results in equal or
    better performance on the training set.
  • Adding a new feature to the model always results in equal or better
    performance on the training set.
Explanation:
Answer: 2 and 4

  • Option 1 (regularization helps on unseen examples): regularization does
    not always improve generalization; if λ is too large the model
    underfits, which hurts performance on both the training set and unseen
    examples. Incorrect.
  • Option 2 (many new features): more features let the model fit the
    training set better, but they also make overfitting more likely.
    Correct.
  • Option 3 (regularization helps on the training set): the penalty term
    constrains θ, so the regularized fit can be worse on the training set;
    equal or better performance is not guaranteed. Incorrect.
  • Option 4 (a new feature helps on the training set): a new feature can
    only improve or match the training-set fit, since the old fit is still
    reachable with the new feature's weight set to 0; this says nothing
    about unseen examples. Correct.
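A quick numeric sketch (my own illustration, not part of the quiz; the data and names are made up) of why adding features always helps on the training set: with least squares over a nested feature set, the training error can only stay equal or drop as features are added.

```python
import numpy as np

# Illustrative sketch (not from the quiz): adding features can only keep
# or lower the training error, since the old fit is still reachable by
# setting the new feature's weight to 0.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + rng.normal(scale=0.2, size=30)

def train_mse(degree):
    # Feature matrix [1, x, x^2, ..., x^degree]: nested as degree grows.
    X = np.vander(x, degree + 1, increasing=True)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ theta - y) ** 2)

errs = [train_mse(d) for d in range(1, 10)]
print(errs)  # non-increasing (in exact arithmetic) as features are added
```

This is exactly the overfitting risk of option 2: the high-degree fit tracks the noise, so the falling training error does not transfer to unseen examples.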
[Figure 2: screenshot of Question 2]
Explanation: the λ penalty shrinks θ; when λ is very, very large, θ1, θ2, …, θn ≈ 0. So pick the option whose θ values are smallest.
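The shrinking effect can be checked numerically. Below is a minimal sketch (my own, not part of the course; it uses a closed-form regularized least-squares fit as a stand-in): as λ grows, θ1…θn are driven toward 0 while the unpenalized θ0 is left alone.

```python
import numpy as np

# Minimal sketch: closed-form regularized (ridge) fit
#   theta = (X^T X + lam * D)^(-1) X^T y,  D = diag(0, 1, ..., 1),
# so theta_0 is not penalized. Larger lam shrinks theta_1..theta_n to ~0.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 3))])
y = X @ np.array([2.0, 1.5, -3.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge_theta(lam):
    D = np.eye(X.shape[1])
    D[0, 0] = 0.0                     # leave the intercept theta_0 alone
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)

for lam in [0.0, 1.0, 100.0, 1e6]:
    print(lam, np.round(ridge_theta(lam), 4))  # weights shrink toward 0
```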

[Figure 3: screenshot of Question 3]
Explanation:
Answer: 1
Which of the following statements about regularization are true? Check all that apply.

  • Using too large a value of λ can cause your hypothesis to underfit the
    data.
  • Using a very large value of λ cannot hurt the performance of your
    hypothesis; the only reason we do not set λ to be too large is to
    avoid numerical problems.
  • Because regularization causes J(θ) to no longer be convex, gradient
    descent may not always converge to the global minimum (when λ > 0, and
    when using an appropriate learning rate α).
  • Because logistic regression outputs values 0≤hθ(x)≤1, its range of
    output values can only be “shrunk” slightly by regularization anyway,
    so regularization is generally not helpful for it.
  • Option 1: too large a λ causes underfitting: when λ is very large,
    θ1, θ2, …, θn ≈ 0 and only θ0 remains, so the fit is essentially a flat
    line. (It is too small a λ that risks overfitting.) Correct.
  • Option 2: same reasoning as option 1; a very large λ does hurt
    performance, by underfitting. Incorrect.
  • Option 3: the penalty term (λ/2m)Σθj² is convex, and adding it to the
    already convex logistic cost keeps J(θ) convex, so gradient descent
    still converges to the global minimum. Incorrect.
  • Option 4: what regularization "shrinks" is θ, not the hypothesis's
    output range; its purpose is to address overfitting. Incorrect.
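For option 3, here is a numeric sanity check (my own sketch, assuming the standard regularized logistic cost from the course): the Hessian XᵀSX/m + (λ/m)D is positive semidefinite at any θ, so J(θ) stays convex.

```python
import numpy as np

# Sketch: the Hessian of the regularized logistic cost
#   J(theta) = -(1/m) sum(y log h + (1-y) log(1-h)) + (lam/2m) sum_{j>=1} theta_j^2
# is X^T S X / m + (lam/m) D, with S = diag(h * (1 - h)) >= 0, so all of
# its eigenvalues are >= 0 and J(theta) remains convex.
rng = np.random.default_rng(1)
m, n = 80, 3
X = np.column_stack([np.ones(m), rng.normal(size=(m, n))])
y = (rng.random(m) < 0.5).astype(float)

def hessian(theta, lam):
    h = 1.0 / (1.0 + np.exp(-X @ theta))
    S = np.diag(h * (1.0 - h))
    D = np.eye(n + 1)
    D[0, 0] = 0.0                      # theta_0 is not regularized
    return X.T @ S @ X / m + (lam / m) * D

theta = rng.normal(size=n + 1)
eigs = np.linalg.eigvalsh(hessian(theta, lam=1.0))
print(eigs.min())  # >= 0 (up to rounding): the cost is convex
```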

[Figure 4: quiz screenshot]

[Figure 5: quiz screenshot]
