August 2, 2016 (Week 4: Neural Networks, Part 2)

Next, we need to understand neural networks mathematically:

[Figure: Neuron Network model1.png]

Here is a picture of a typical neural network. It has 4 layers: Layer 1 is called the input layer; Layers 2 and 3 are called hidden layers, which compute intermediate features that lead us to a more meaningful (more abstract) result; Layer 4 is called the output layer.

[Figure: answer1.png]

We want to compute the values in Layer 2. How do we do that? Suppose we are already given a 3x4 weight matrix Θ(1). Then a(2) is simply g(Θ(1) * a(1)), where g is the sigmoid function. Done!
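
To make this concrete, here is a minimal Octave sketch of that one forward step. The values in Theta1 and a1 are made-up placeholders (in the course they would come from training and from the input x):

```matlab
% One forward-propagation step from Layer 1 to Layer 2 (Octave)
sigmoid = @(z) 1 ./ (1 + exp(-z));  % element-wise logistic function g(z)

a1 = [0.5; -1.2; 2.0];   % Layer 1 activations (the input x); placeholder values
Theta1 = rand(3, 4);     % 3x4 weight matrix mapping Layer 1 -> Layer 2; placeholder values

a1 = [1; a1];            % prepend the bias unit a0 = 1
z2 = Theta1 * a1;        % weighted sum feeding each Layer 2 unit
a2 = sigmoid(z2)         % Layer 2 activations: a(2) = g(Theta(1) * a(1))
```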

Next,

[Figure: Neuron Network example of logic &&.png]

A neural network can do even more complicated things, such as computing logical functions. By taking advantage of how quickly the sigmoid function saturates to 0 or 1, a single neuron can approximate any basic logic gate. The picture above shows the AND example; a quick check follows below.
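
Here is that AND neuron in Octave, using the weights from the lecture, Θ = [-30, 20, 20]; the truth-table loop is just my own verification:

```matlab
% A single sigmoid neuron computing logical AND
sigmoid = @(z) 1 ./ (1 + exp(-z));
Theta = [-30; 20; 20];   % bias weight, then one weight per input (from the lecture)

for x1 = 0:1
  for x2 = 0:1
    h = sigmoid(Theta' * [1; x1; x2]);   % the leading 1 is the bias unit
    printf("AND(%d,%d) = %.4f\n", x1, x2, h);
  end
end
% Prints ~0 for (0,0), (0,1), (1,0) and ~1 for (1,1)
```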
Finally,

[Figure: Neuron Network Multicalss Classification.png]

Packing up all the cool things we've learned so far, it's time to realize the true power of neural networks: multiclass classification. How do we do that?
We simply encode each class as a distinct one-hot vector, e.g. [1,0,0,0], [0,1,0,0], [0,0,1,0], etc., and the network predicts the class whose output unit fires the strongest (see the sketch below).
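
A minimal Octave sketch of that encoding and of reading off a prediction (the variable names are mine; as far as I recall, the course's predict.m does the equivalent with max over the output layer):

```matlab
% One-hot encoding: class k out of K becomes the k-th standard basis vector
K = 4;                        % number of classes
y = 3;                        % example label
y_vec = (1:K)' == y           % gives [0; 0; 1; 0]

% At prediction time, pick the output unit with the largest activation
h = [0.1; 0.7; 0.15; 0.05];   % example output-layer activations (made up)
[~, p] = max(h)               % predicted class: p = 2
```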

You can find my complete code for this section at https://github.com/yhyu13/Coursera-Machine-Learning-Andrew-Ng (file name: predict.m).
