Neural Networks and Deep Learning: Exercise Set 1

Sigmoid neurons simulating perceptrons, part I

Suppose we take all the weights and biases in a network of perceptrons, and multiply them by a positive constant, c>0. Show that the behaviour of the network doesn't change.

For the perceptron rule:

output = 0 if w⋅x + b ≤ 0, and output = 1 if w⋅x + b > 0.

Multiplying both sides of each inequality by c > 0 preserves its sign, so the rule is unchanged:

output = 0 if cw⋅x + cb ≤ 0, and output = 1 if cw⋅x + cb > 0.

Hence every perceptron, and therefore the whole network, produces exactly the same output as before.
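The scaling argument can be checked numerically. Below is a minimal sketch (the weights, bias, inputs, and the constant c are arbitrary values chosen for illustration) comparing a perceptron's output before and after scaling by c:

```python
import numpy as np

def perceptron(w, b, x):
    """Perceptron rule: output 1 if w.x + b > 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Arbitrary weights, bias, and positive constant, chosen for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = rng.normal()
c = 3.7

# The output is unchanged for every input when w and b are both scaled by c.
for _ in range(100):
    x = rng.normal(size=5)
    assert perceptron(w, b, x) == perceptron(c * w, c * b, x)
print("scaling by c > 0 leaves every output unchanged")
```

Scaling both w and b together is essential: scaling only one of them would shift the decision boundary.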

Sigmoid neurons simulating perceptrons, part II

Suppose we have the same setup as the last problem - a network of perceptrons. Suppose also that the overall input to the network of perceptrons has been chosen. We won't need the actual input value, we just need the input to have been fixed. Suppose the weights and biases are such that w⋅x+b≠0 for the input x to any particular perceptron in the network. Now replace all the perceptrons in the network by sigmoid neurons, and multiply the weights and biases by a positive constant c>0. Show that in the limit as c→∞ the behaviour of this network of sigmoid neurons is exactly the same as the network of perceptrons. How can this fail when w⋅x+b=0 for one of the perceptrons?

Multiplying (w⋅x + b) by c > 0 does not change its sign, so it does not change whether σ(c(w⋅x + b)) lies above or below 0.5. More precisely, as c→∞, σ(c(w⋅x + b)) → 1 when w⋅x + b > 0 and → 0 when w⋅x + b < 0, which is exactly the perceptron's output, so the whole network behaves identically in the limit. But when w⋅x + b = 0, σ(c(w⋅x + b)) = σ(0) = 0.5 for every c, so the limit is 0.5 rather than the perceptron's output of 0, and the neuron can no longer make the binary decision.
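A quick numerical sketch of the limit (the value z = w⋅x + b = 0.3 is an arbitrary assumption for illustration): as c grows, σ(cz) saturates to 1 or 0, except at z = 0 where it stays pinned at 0.5:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.3  # hypothetical w.x + b; any nonzero value shows the same behaviour
for c in [1, 10, 100, 1000]:
    # sigmoid(c*z) -> 1 for z > 0, and sigmoid(-c*z) -> 0 for the negated input
    print(c, sigmoid(c * z), sigmoid(-c * z))

# At w.x + b = 0 the limit argument breaks: sigmoid(c * 0) = 0.5 for every c.
print(sigmoid(1000 * 0.0))  # -> 0.5
```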

Four output neuron exercise

There is a way of determining the bitwise representation of a digit by adding an extra layer to the three-layer network above. The extra layer converts the output from the previous layer into a binary representation, as illustrated in the figure below. Find a set of weights and biases for the new output layer. Assume that the first 3 layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least 0.99, and incorrect outputs have activation less than 0.01.


[Figure: the three-layer network with an extra four-neuron output layer converting the digit output to binary]

Binary representations of the digits:

  • 0 :0000
  • 1 :0001
  • 2 :0010
  • 3 :0011
  • 4 :0100
  • 5 :0101
  • 6 :0110
  • 7 :0111
  • 8 :1000
  • 9 :1001
Reading the new output layer from top to bottom, its neurons represent the bits 2^0, 2^1, 2^2, 2^3. The digits 1, 3, 5, 7, 9 should make the 2^0 neuron output 1; the digits 2, 3, 6, 7 should make the 2^1 neuron output 1; and so on. For the first output neuron we can take w = (-1, 1, -1, 1, -1, 1, -1, 1, -1, 1) and b = 0. Then w⋅x + b = -x0 + x1 - x2 + x3 - x4 + x5 - x6 + x7 - x8 + x9 > 0 exactly when the sum of x1, x3, x5, x7, x9 exceeds the sum of the other activations (normally a single activation is at least 0.99 and all the rest are below 0.01). For example, if x5 = 0.99 and the others are 0.01, then w⋅x + b = 0.98 > 0, so σ(w⋅x + b) > 0.5 and the 2^0 neuron outputs 1. The same reasoning gives the weight vectors w for all four output neurons:
    1. -1, 1,-1, 1,-1, 1,-1, 1,-1, 1
    2. -1,-1, 1, 1,-1,-1, 1, 1,-1,-1
    3. -1,-1,-1,-1, 1, 1, 1, 1,-1,-1
    4. -1,-1,-1,-1,-1,-1,-1,-1, 1, 1
Note the condition given in the problem: the correct output has activation at least 0.99 and the incorrect ones less than 0.01. This rules out cases such as the correct activation being only 0.8 while the other nine are 0.7 each: each of them is smaller than 0.8, but their combined weight in w⋅x would outweigh it and flip some bits to the wrong value.
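The four weight vectors can be verified directly. The sketch below builds the weight matrix from the solution above, feeds in the worst-case activations allowed by the problem (correct neuron at 0.99, the others at 0.01), and checks that thresholding the sigmoid outputs at 0.5 recovers each digit's 4-bit binary representation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weight matrix from the solution: row i computes the 2^i bit of the digit.
W = np.array([
    [-1,  1, -1,  1, -1,  1, -1,  1, -1,  1],  # 2^0: fires for 1,3,5,7,9
    [-1, -1,  1,  1, -1, -1,  1,  1, -1, -1],  # 2^1: fires for 2,3,6,7
    [-1, -1, -1, -1,  1,  1,  1,  1, -1, -1],  # 2^2: fires for 4,5,6,7
    [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],  # 2^3: fires for 8,9
], dtype=float)
b = np.zeros(4)  # all biases are 0

for digit in range(10):
    x = np.full(10, 0.01)   # worst-case incorrect activations
    x[digit] = 0.99         # worst-case correct activation
    bits = (sigmoid(W @ x + b) > 0.5).astype(int)
    assert list(bits) == [(digit >> i) & 1 for i in range(4)]
print("all ten digits decode to their 4-bit binary representation")
```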
