Notes for Deep Learning Lessons of Prof. Hung-yi Lee (3)

To be honest, I had never fully understood how the perceptron develops into a neural network. Today, Prof. Lee helped me solve this problem, which feels good. Maybe I still have not fully gotten the point, haha.

As we all know, the perceptron model is a linear model. For the following task, in which we want to divide these points into two groups (the red points go to one group and the blue points to the other), we need to make the output for Class 1 larger than 0.5 and the output for Class 2 smaller than 0.5. However, we cannot finish this task with a linear model.
[Figure 1]
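To see why no linear model can work here, suppose the points follow the classic XOR-style layout (my assumption about the figure): Class 1 at (0,1) and (1,0), Class 2 at (0,0) and (1,1). If a linear model $f(x_1, x_2) = w_1 x_1 + w_2 x_2 + b$ gave $f > 0$ for Class 1 and $f < 0$ for Class 2, we would need

$$ w_2 + b > 0, \qquad w_1 + b > 0, \qquad b < 0, \qquad w_1 + w_2 + b < 0. $$

Adding the first two inequalities gives $w_1 + w_2 + 2b > 0$, so $w_1 + w_2 + b > -b > 0$, which contradicts the last inequality. Hence no single linear boundary can separate the two classes.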
How can we finish this task? We can use a feature transformation. In other words, we apply a specific mapping that changes $(x_1, x_2)$ into $(x_1', x_2')$ so that the points become separable by a linear model.

[Figure 2]
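To make this concrete, here is a small sketch of one possible hand-picked transformation, still assuming the XOR-style layout above (the coordinates and the distance-based mapping are illustrative assumptions, not necessarily the exact ones in the slide):

```python
import numpy as np

# Assumed XOR-style points: Class 1 = {(0,1), (1,0)}, Class 2 = {(0,0), (1,1)}
points = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# One illustrative transformation:
#   x1' = distance to (0, 0),  x2' = distance to (1, 1)
x1_prime = np.linalg.norm(points - np.array([0.0, 0.0]), axis=1)
x2_prime = np.linalg.norm(points - np.array([1.0, 1.0]), axis=1)
transformed = np.stack([x1_prime, x2_prime], axis=1)

print(transformed)
# Class 2 points map to (0, 1.41) and (1.41, 0); both Class 1 points map to (1, 1),
# so a single line such as x1' + x2' = 1.7 now separates the two classes.
```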
But finding a suitable transformation by hand is often difficult. We can use a neural network to learn it for us. For example, in the following figure, the upper perceptron transforms $x_1$ into $x_1'$ and the bottom perceptron transforms $x_2$ into $x_2'$, and after this transformation the points become separable by a perceptron. All we have to do is train the parameters so that the transformation suits our current dataset, and this training can be done with various optimization algorithms; a training sketch follows the figures below.
[Figure 3]
[Figure 4]
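Below is a minimal training sketch of this idea, again using the assumed XOR-style toy data (the architecture, learning rate, and number of steps are illustrative choices, not the lecture's exact setup): a hidden layer of two units learns the transformation $(x_1, x_2) \to (x_1', x_2')$, and the output unit acts as the final perceptron on the transformed features.

```python
import torch
import torch.nn as nn

# XOR-style toy data (assumed layout; the points in the lecture figure may differ)
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])   # Class 2 -> 0, Class 1 -> 1

# Two "perceptrons" in the hidden layer learn the feature transformation
# (x1, x2) -> (x1', x2'); the output unit then separates the transformed points linearly.
model = nn.Sequential(
    nn.Linear(2, 2),   # hidden layer: produces (x1', x2')
    nn.Sigmoid(),
    nn.Linear(2, 1),   # output perceptron on the transformed features
    nn.Sigmoid(),
)

criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

for step in range(2000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print(model(x).detach().round())  # typically matches y once training converges
```

The point is that the hidden layer's weights are not designed by hand: the optimizer adjusts them until the learned transformation makes the data linearly separable for the output unit.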
