Andrew Ng's Machine Learning — My Collection of Missed Questions

Week 2

Which of the following are reasons for using feature scaling?

A. It speeds up gradient descent by making it require fewer iterations to get to a good solution.

B. It speeds up gradient descent by making each iteration of gradient descent less expensive to compute.

Answer: A. Scaling puts the features on comparable ranges, which makes the cost function's contours more circular, so gradient descent converges in fewer iterations; it does not reduce the cost of computing each iteration.
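
A minimal sketch of the mean normalization used in the course, assuming a NumPy feature matrix `X`; the helper name `feature_normalize` and the sample values are illustrative, not from the quiz.

```python
import numpy as np

def feature_normalize(X):
    """Scale each column of X to zero mean and unit standard deviation.

    With comparably scaled features, the cost surface is closer to
    circular, so gradient descent needs fewer iterations to converge.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# Example: house sizes (sq ft) and bedroom counts on very different scales.
X = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0]])
X_norm, mu, sigma = feature_normalize(X)
print(X_norm)  # each column now has mean 0 and std 1
```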


Week 3

2. Suppose you have the following training set, and fit a logistic regression classifier $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$.

Which of the following are true? Check all that apply.

A. Adding polynomial features (e.g., instead using $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_1 x_2 + \theta_5 x_2^2)$) could increase how well we can fit the training data.

B. At the optimal value of $\theta$ (e.g., found by fminunc), we will have $J(\theta) \ge 0$.

C. Adding polynomial features (e.g., instead using $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_1 x_2 + \theta_5 x_2^2)$) would increase $J(\theta)$ because we are now summing over more terms.

D. If we train gradient descent for enough iterations, for some examples $x^{(i)}$ in the training set it is possible to obtain $h_\theta(x^{(i)}) > 1$.

Answer: A and B. Polynomial features give the hypothesis more flexibility, so the fit to the training data can only improve, and $J(\theta)$ is a sum of non-negative terms, so $J(\theta) \ge 0$ holds everywhere. C is false because the extra features let the optimizer fit the data at least as well, and D is false because the sigmoid output always lies strictly between 0 and 1.
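
To make the reasoning concrete, here is a small sketch with arbitrary illustrative inputs: the sigmoid bounds the hypothesis strictly inside $(0, 1)$ (ruling out D), and the polynomial mapping in A/C just adds the terms $x_1^2$, $x_1 x_2$, $x_2^2$.

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^{-z}); its range is the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def polynomial_features(x1, x2):
    """Map (x1, x2) to the six terms of the expanded hypothesis:
    [1, x1, x2, x1^2, x1*x2, x2^2]."""
    return np.array([1.0, x1, x2, x1**2, x1 * x2, x2**2])

# The hypothesis is bounded: even extreme inputs stay strictly below 1,
# which is why option D is impossible.
print(sigmoid(np.array([-100.0, 0.0, 100.0])))  # ~[0.0, 0.5, ~1.0]

# Expanded feature vector for an arbitrary example (illustrative values).
print(polynomial_features(2.0, 3.0))  # [1. 2. 3. 4. 6. 9.]
```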


3. For logistic regression, the gradient is given by $\frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}$. Which of these is a correct gradient descent update for logistic regression with a learning rate of $\alpha$? Check all that apply.

A. $\theta := \theta - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x^{(i)}$.

B. $\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(\theta^T x - y^{(i)}\right) x_j^{(i)}$ (simultaneously update for all $j$).

C. $\theta := \theta - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(\frac{1}{1 + e^{-\theta^T x^{(i)}}} - y^{(i)}\right) x^{(i)}$.

D. $\theta := \theta - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(\theta^T x - y^{(i)}\right) x^{(i)}$.

Answer: A and C. These are the same vectorized update, since $h_\theta(x^{(i)}) = \frac{1}{1 + e^{-\theta^T x^{(i)}}}$. B and D substitute $\theta^T x$ for $h_\theta(x)$, which is the update for linear regression, not logistic regression.
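
A minimal sketch of the update from options A/C in vectorized form, assuming a design matrix `X` with an intercept column and labels `y`; the dataset and the 1000-iteration loop are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_step(theta, X, y, alpha):
    """One logistic regression update:
    theta := theta - alpha * (1/m) * X^T (g(X theta) - y)
    """
    m = len(y)
    error = sigmoid(X @ theta) - y          # h_theta(x^(i)) - y^(i) for all i
    return theta - alpha * (X.T @ error) / m

# Tiny illustrative dataset: intercept column plus one feature.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = np.zeros(2)
for _ in range(1000):
    theta = gradient_descent_step(theta, X, y, alpha=0.1)
print(theta)  # learned parameters after 1000 iterations
```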


1. You are training a classification model with logistic regression. Which of the following statements are true? Check all that apply.

A. Introducing regularization to the model always results in equal or better performance on examples not in the training set.

B. Introducing regularization to the model always results in equal or better performance on the training set.

C. Adding a new feature to the model always results in equal or better performance on the training set.

D. Adding many new features to the model helps prevent overfitting on the training set.

Answer: C. At the optimum, a new feature's weight can always be set to zero, so adding it never hurts the training fit. Regularization can worsen performance on the training set (B) and does not guarantee better performance on unseen examples (A), and adding many new features encourages overfitting rather than preventing it (D).
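
For reference, a short sketch of the regularized logistic regression cost from the course, where the penalty $\frac{\lambda}{2m}\sum_{j \ge 1}\theta_j^2$ shrinks the weights rather than improving the training fit; the data and the `lam` value are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_cost(theta, X, y, lam):
    """J(theta) = -(1/m) * sum[y log h + (1-y) log(1-h)]
                  + (lam / (2m)) * sum_{j>=1} theta_j^2
    (the intercept theta_0 is conventionally not penalized).
    """
    m = len(y)
    h = sigmoid(X @ theta)
    data_term = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    penalty = lam * np.sum(theta[1:] ** 2) / (2 * m)
    return data_term + penalty

# Illustrative data: larger lam raises the penalty, pushing the optimizer
# toward smaller weights at the expense of training-set fit.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]])
y = np.array([0.0, 1.0, 1.0])
theta = np.array([0.1, 0.2])
print(regularized_cost(theta, X, y, lam=1.0))
```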


