Coursera-Machine Learning-ex2

Some Points:

  1. We can use regularization to prevent overfitting. Note that the regularization sum starts at j = 1, so the bias parameter \theta_0 (theta(1) in MATLAB indexing) is not penalized; the matching gradient is shown right after this list.
  2. J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\big(h(x^{(i)})\big) - (1 - y^{(i)})\log\big(1 - h(x^{(i)})\big)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}
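
For reference, the gradient that pairs with this cost (and that costFunctionReg.m below computes in vectorized form) is:

\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\big(h(x^{(i)}) - y^{(i)}\big)x_0^{(i)}

\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h(x^{(i)}) - y^{(i)}\big)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad (j \geq 1)

where h(x) = \frac{1}{1 + e^{-\theta^{T}x}} is the sigmoid hypothesis.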

The Results:

  • plotData.m
function plotData(X, y)
% Positive (y = 1) examples as black crosses, negative (y = 0) as yellow circles.
pos = find(y == 1);
neg = find(y == 0);

hold on;
plot(X(pos,1), X(pos,2), 'kx', 'LineWidth', 2, 'MarkerSize', 7);
plot(X(neg,1), X(neg,2), 'ko', 'MarkerFaceColor', 'y', 'MarkerSize', 7);
hold off;
end
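
For context, this is roughly how the exercise's driver script (ex2.m) loads ex2data1.txt and calls plotData; the labels follow the admission-data interpretation from the assignment:

data = load('ex2data1.txt');   % columns 1-2: exam scores, column 3: admitted (0/1)
X = data(:, [1, 2]);
y = data(:, 3);
plotData(X, y);
xlabel('Exam 1 score');
ylabel('Exam 2 score');
legend('Admitted', 'Not admitted');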

  • sigmoid.m
function g = sigmoid(z)

g = zeros(size(z));

% Works element-wise, so z may be a scalar, a vector, or a matrix.
g = 1 ./ (1 + exp(-z));

end
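
A quick sanity check from the Octave prompt (expected outputs in the comments):

sigmoid(0)                % 0.5
sigmoid([-1e5, 0, 1e5])   % approximately [0, 0.5, 1]
sigmoid([1 2; 3 4])       % works on matrices element-wise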
  • costFunction.m
function [J, grad] = costFunction(theta, X, y)

m = length(y); % number of training examples

J = 0;
grad = zeros(size(theta));

% Vectorized cross-entropy cost, split into the y = 1 and y = 0 contributions.
pos = y' * log(sigmoid(X * theta));
neg = (1 - y)' * log(1 - sigmoid(X * theta));
J = -1 / m * (pos + neg);

% Loop form of the gradient, kept for reference; it is equivalent to the
% vectorized line below.
% for i = 1:length(theta)
%     grad(i) = (sigmoid(X * theta) - y)' * X(:, i) / m;
% end

% Vectorized gradient: grad(j) = (1/m) * sum_i (h(x^(i)) - y^(i)) * x_j^(i)
grad = X' * (sigmoid(X * theta) - y) ./ m;

end
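
With the cost and gradient in place, ex2.m minimizes J(\theta) using fminunc rather than hand-written gradient descent; the call looks roughly like this (X is assumed to already contain the column of ones for the intercept):

initial_theta = zeros(size(X, 2), 1);
options = optimset('GradObj', 'on', 'MaxIter', 400);   % tell fminunc we supply the gradient
[theta, cost] = fminunc(@(t) costFunction(t, X, y), initial_theta, options);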
  • predict.m
function p = predict(theta, X)

m = size(X, 1); % number of training examples

p = zeros(m, 1);

% Predict class 1 whenever the estimated probability sigmoid(x * theta) is at least 0.5.
for i = 1:m
    if (sigmoid(X(i,:) * theta) >= 0.5)
        p(i) = 1;
    else
        p(i) = 0;
    end
end

end
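
The loop is easy to read, but since the comparison operator already returns a 0/1 vector, the body of predict can be collapsed into a single vectorized line:

p = double(sigmoid(X * theta) >= 0.5);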
  • costFunctionReg.m
function [J, grad] = costFunctionReg(theta, X, y, lambda)

m = length(y); % number of training examples

J = 0;
grad = zeros(size(theta));

pos = y' * log(sigmoid(X * theta));
neg = (1 - y)' * log(1 - sigmoid(X * theta));
% Regularization term: sums over theta(2:end) only, so the bias theta(1) is excluded.
reg = theta(2:end)' * theta(2:end) * (lambda / (2 * m));
J = -1 / m * (pos + neg) + reg;

% Regularize every gradient component except the one for theta(1).
reg = theta .* (lambda / m);
reg(1) = 0;
grad = X' * (sigmoid(X * theta) - y) ./ m + reg;

end
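
An equivalent way to write the regularized gradient, shown here only as an alternative, is to zero out the bias entry of theta before adding the penalty (theta_no_bias is just an illustrative local name):

theta_no_bias = [0; theta(2:end)];                 % theta(1) is not regularized
grad = X' * (sigmoid(X * theta) - y) ./ m + (lambda / m) .* theta_no_bias;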

 
