[Machine Learning] Week 3, 2. Hypothesis Representation

Hypothesis Representation

We could approach the classification problem ignoring the fact that y is discrete-valued, and use our old linear regression algorithm to try to predict y given x. However, it is easy to construct examples where this method performs very poorly. Intuitively, it also doesn't make sense for hθ(x) to take values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}. To fix this, let's change the form for our hypotheses hθ(x) to satisfy 0 ≤ hθ(x) ≤ 1. This is accomplished by plugging θ^T x into the Logistic Function.

Our new form uses the "Sigmoid Function," also called the "Logistic Function":

hθ(x) = g(θ^T x)
z = θ^T x
g(z) = 1 / (1 + e^(−z))

The following image shows us what the sigmoid function looks like:


[Figure: plot of the sigmoid function g(z) = 1 / (1 + e^(−z)), an S-shaped curve that approaches 0 as z → −∞, approaches 1 as z → +∞, and equals 0.5 at z = 0]

The function g(z), shown here, maps any real number to the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.
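As a concrete illustration, here is a minimal NumPy sketch of the sigmoid and the resulting hypothesis. The function names `sigmoid` and `hypothesis` are illustrative choices, not part of the course material:

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z)); maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """h_theta(x) = g(theta^T x): the predicted probability that y = 1."""
    return sigmoid(np.dot(theta, x))

# g(0) = 0.5; large positive z maps near 1, large negative z near 0
print(sigmoid(0.0))    # 0.5
print(sigmoid(10.0))   # ~0.99995
print(sigmoid(-10.0))  # ~0.00005
```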

hθ(x) will give us the probability that our output is 1. For example, hθ(x) = 0.7 gives us a probability of 70% that our output is 1. The probability that our prediction is 0 is just the complement of the probability that it is 1 (e.g. if the probability that it is 1 is 70%, then the probability that it is 0 is 30%).


hθ(x) = P(y = 1 | x; θ) = 1 − P(y = 0 | x; θ)
P(y = 0 | x; θ) + P(y = 1 | x; θ) = 1
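A quick numeric check of the complement relation, in the same style as the sketch above. The values of `theta` and `x` are made up for illustration and chosen so that θ^T x ≈ 0.847, which gives hθ(x) ≈ 0.7, matching the 70% example:

```python
import numpy as np

theta = np.array([0.847, 0.0])  # illustrative parameters, not from the course
x = np.array([1.0, 2.0])        # x[0] = 1 plays the role of the intercept term

p_y1 = 1.0 / (1.0 + np.exp(-np.dot(theta, x)))  # P(y = 1 | x; theta) ~ 0.7
p_y0 = 1.0 - p_y1                               # P(y = 0 | x; theta) ~ 0.3

print(p_y1, p_y0, p_y1 + p_y0)  # the two probabilities always sum to 1
```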



Source: Machine Learning, Andrew Ng, Stanford (Coursera)
