Andrew Ng Deep Learning Notes - C2W1 - Neural Network Optimization Basics and Regularization - Quiz

C2W1 Quiz - Practical aspects of deep learning

[Quiz question 1 screenshot: splitting 10,000,000 examples into train/dev/test]

Ans: C
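
For 10,000,000 examples, even a 1% slice is 100,000 examples, which is plenty for evaluation. A minimal NumPy sketch of such a split (my own illustration; the variable names are hypothetical):

```python
import numpy as np

m = 10_000_000                       # total number of examples
indices = np.random.permutation(m)   # shuffle before splitting

n_train = int(0.98 * m)              # 98% train
n_dev = int(0.01 * m)                # 1% dev

train_idx = indices[:n_train]
dev_idx = indices[n_train:n_train + n_dev]
test_idx = indices[n_train + n_dev:]  # remaining 1% test

print(len(train_idx), len(dev_idx), len(test_idx))  # 9800000 100000 100000
```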

[Quiz question 2 screenshot: the distribution of the dev and test sets]

Ans: A (Come from the same distribution)

[Quiz question 3 screenshot: what to try when a model has high variance]

Ans: C, D

[Quiz question 4 screenshot: supermarket checkout classifier with 0.5% train error and 7% dev error]

Ans: A, C

Note: refer to the diagram below.

[Supporting diagram screenshot]

[Quiz question 5 screenshot: what is weight decay?]

Ans: A
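
The answer ties L2 regularization to weight decay. A minimal NumPy sketch of why the two names describe the same update (my own illustration; W, dW, lambd, lr, and m are hypothetical names):

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(4, 3)   # a weight matrix
dW = np.zeros_like(W)       # data gradient set to zero here, to isolate the decay term
lambd = 0.7                 # regularization hyperparameter lambda
m = 1000                    # number of training examples
lr = 0.1                    # learning rate

# The L2 term (lambd / (2*m)) * ||W||_F^2 in the cost adds (lambd / m) * W to the gradient:
W_new = W - lr * (dW + (lambd / m) * W)

# Equivalently, W is multiplied by a factor slightly below 1 on every iteration:
print(np.allclose(W_new, (1 - lr * lambd / m) * W))  # True
```

This shrinking is also why increasing lambda (question 6) pushes the weights toward 0.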

[Quiz question 6 screenshot: effect of increasing the regularization hyperparameter lambda]

Ans: A

[Quiz question 7 screenshot: inverted dropout at test time]

Ans: D
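
A minimal NumPy sketch of the contrast (my own illustration; the function and variable names are hypothetical): inverted dropout scales the surviving activations by 1/keep_prob during training, so the test-time forward pass needs neither the random mask nor the scaling factor:

```python
import numpy as np

def forward_train(a, keep_prob=0.8):
    """Inverted dropout, training time."""
    mask = np.random.rand(*a.shape) < keep_prob  # keep each unit with probability keep_prob
    a = a * mask                                 # randomly eliminate units
    return a / keep_prob                         # scale up so the expected activation is unchanged

def forward_test(a):
    """Test time: no dropout, no 1/keep_prob factor."""
    return a

a = np.random.randn(5, 4)
print(forward_train(a).shape, forward_test(a).shape)  # (5, 4) (5, 4)
```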

[Quiz question 8 screenshot: increasing keep_prob from 0.5 to 0.6]

Ans: B, D

[Quiz question 9 screenshot: techniques for reducing variance]

Ans: B, E, G (Data augmentation, L2 regularization, Dropout)
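
L2 regularization and dropout are sketched above (questions 5 and 7); for data augmentation, here is a minimal sketch that doubles an image training set with horizontal flips (my own illustration, assuming images shaped (m, height, width, channels)):

```python
import numpy as np

X = np.random.rand(32, 64, 64, 3)        # a hypothetical batch of 32 RGB images
X_flipped = X[:, :, ::-1, :]             # mirror each image along the width axis
X_augmented = np.concatenate([X, X_flipped], axis=0)
print(X_augmented.shape)                 # (64, 64, 64, 3)
```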

[Quiz question 10 screenshot: why normalize the inputs x?]

Ans: B
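
Normalization centers each input feature and gives it unit variance, which makes the cost surface rounder so gradient descent can take larger, more direct steps. A minimal sketch (my own illustration); note that the dev/test data must be normalized with the statistics computed on the training set:

```python
import numpy as np

X_train = np.random.rand(1000, 20) * 50  # hypothetical raw training inputs
X_test = np.random.rand(200, 20) * 50    # hypothetical raw test inputs

mu = X_train.mean(axis=0)                # per-feature mean from the training set only
sigma = X_train.std(axis=0)              # per-feature std from the training set only

X_train_norm = (X_train - mu) / sigma
X_test_norm = (X_test - mu) / sigma      # reuse the training statistics at test time
```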

The quiz questions and selected answers, transcribed:

  1. If you have 10,000,000 examples, how would you split the train/dev/test set?
    • 98% train, 1% dev, 1% test
  2. The dev and test set should:
    • Come from the same distribution
  3. If your Neural Network model seems to have high variance, which of the following would be promising things to try?
    • Add regularization
    • Get more training data
  4. You are working on an automated check-out kiosk for a supermarket, and are building a classifier for apples, bananas and oranges. Suppose your classifier obtains a training set error of 0.5%, and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.)
    • Increase the regularization parameter lambda
    • Get more training data
  5. What is weight decay?
    • A regularization technique (such as L2 regularization) that results in gradient descent shrinking the weights on every iteration.
  6. What happens when you increase the regularization hyperparameter lambda?
    • Weights are pushed toward becoming smaller (closer to 0)
  7. With the inverted dropout technique, at test time:
    • You do not apply dropout (do not randomly eliminate units) and do not keep the 1/keep_prob factor in the calculations used in training
  8. Increasing the parameter keep_prob from (say) 0.5 to 0.6 will likely cause the following: (Check the two that apply)
    • Reducing the regularization effect
    • Causing the neural network to end up with a lower training set error
  9. Which of these techniques are useful for reducing variance (reducing overfitting)? (Check all that apply.)
    • Dropout
    • L2 regularization
    • Data augmentation
  10. Why do we normalize the inputs x?
    • It makes the cost function faster to optimize
