Stanford Machine Learning, Week 6 Assignment: Regularized Linear Regression and Bias vs. Variance

linearRegCostFunction

m = length(y);
J = 0;
grad = zeros(size(theta));

% Regularized cost: squared error plus a penalty on theta(2:end);
% the bias term theta(1) is not regularized.
J = 1 / (2 * m) * ( sum((X * theta - y) .^ 2) + lambda * sum(theta(2:end) .^ 2) );

% Gradient: the unregularized term for every entry, then add the
% regularization term for all entries except theta(1).
grad = 1 / m * ((X * theta - y)' * X)';
grad(2:end) = grad(2:end) + lambda / m * theta(2:end);
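
For reference, the formulas this code implements, as given in ex5:

J(\theta) = \frac{1}{2m}\left(\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)^2\right) + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2

\frac{\partial J}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)\,x_0^{(i)}

\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)\,x_j^{(i)} + \frac{\lambda}{m}\theta_j \quad (j \ge 1)
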
learningCurve

m = size(X, 1);
for i = 1:m
    % Train on the first i examples only; error_train is measured on
    % that same subset, while error_val always uses the full
    % cross-validation set.
    theta = trainLinearReg(X(1:i,:), y(1:i), lambda);
    error_train(i) = linearRegCostFunction(X(1:i,:), y(1:i), theta, 0);
    error_val(i) = linearRegCostFunction(Xval, yval, theta, 0);
end
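
To actually draw the curve, the function is called from a driver script. A minimal sketch (assumed here, following the ex5.m starter script, which prepends the intercept column of ones itself):

m = size(X, 1);
lambda = 0;
[error_train, error_val] = learningCurve([ones(m, 1) X], y, ...
    [ones(size(Xval, 1), 1) Xval], yval, lambda);
plot(1:m, error_train, 1:m, error_val);   % training vs. cross-validation error
legend('Train', 'Cross Validation');
xlabel('Number of training examples');
ylabel('Error');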

validationCurve

lambda_vec = [0 0.001 0.003 0.01 0.03 0.1 0.3 1 3 10]';

error_train = zeros(length(lambda_vec), 1);
error_val = zeros(length(lambda_vec), 1);

% Loop over each candidate lambda: train with regularization, but
% evaluate both errors without it (lambda = 0).
for i = 1:length(lambda_vec)
    theta = trainLinearReg(X, y, lambda_vec(i));
    error_train(i) = linearRegCostFunction(X, y, theta, 0);
    error_val(i) = linearRegCostFunction(Xval, yval, theta, 0);
end
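
Plotting the two curves is all the assignment asks for, but a natural follow-up (my addition, not part of the starter code) is to pick the lambda with the lowest cross-validation error:

% Choose the lambda whose model generalizes best on the validation set
[~, best_idx] = min(error_val);
best_lambda = lambda_vec(best_idx);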

Note that both errors are computed without regularization: lambda is passed as 0 to linearRegCostFunction in every call above.
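
In the notation of ex5, the two quantities being plotted are:

J_{\mathrm{train}}(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)^2
\qquad
J_{\mathrm{cv}}(\theta) = \frac{1}{2m_{\mathrm{cv}}}\sum_{i=1}^{m_{\mathrm{cv}}}\big(h_\theta(x_{\mathrm{cv}}^{(i)}) - y_{\mathrm{cv}}^{(i)}\big)^2

That is, the λ penalty term from J(θ) is dropped, which is exactly what passing 0 as the last argument achieves.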
