Batch Normalization (BN)

Why does batch normalization work?

(1) We know that normalizing input features can speed up learning. One intuition is that doing the same thing for hidden-layer activations should also help.
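As a quick illustration of input normalization, here is a minimal NumPy sketch (the data and variable names are made up for this example):

```python
import numpy as np

# Hypothetical design matrix: 1000 samples, 3 features with very different scales
X = np.random.randn(1000, 3) * np.array([1.0, 50.0, 0.01]) + np.array([0.0, 10.0, -5.0])

# Standardize each feature to zero mean and unit variance so that
# gradient descent is not dominated by the largest-scale feature.
mean = X.mean(axis=0)
std = X.std(axis=0) + 1e-8   # small epsilon to avoid division by zero
X_norm = (X - mean) / std
```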

(2) It reduces the problem of covariate shift.

Suppose you have trained your cat-recognizing network on black cats but evaluate it on colored cats. The data distribution has changed (this is called covariate shift). Even if there exists a true boundary separating cat from non-cat, you cannot expect to learn that boundary from black cats alone, so you may need to retrain the network.

For a neural network, if the input distribution stayed constant, the output distribution of a given hidden layer would also stay constant. But as the weights of that layer and the layers before it change during training, that output distribution shifts, which looks like covariate shift from the perspective of the layers that follow. Just as with the cat-recognizing network, those later layers have to keep re-adapting. Batch normalization addresses this by forcing the layer's outputs toward a zero-mean, unit-variance distribution (followed by a learned scale and shift). This lets the layers after it learn more independently of the earlier layers and concentrate on their own task, which speeds up training.
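Here is a minimal sketch of the batch-norm forward pass for one layer (NumPy; the function name is mine, while gamma and beta are the usual learnable scale and shift):

```python
import numpy as np

def batch_norm_forward(z, gamma, beta, eps=1e-5):
    """Normalize pre-activations z of shape (batch_size, num_units)
    to zero mean and unit variance per unit, then rescale with the
    learnable parameters gamma (scale) and beta (shift)."""
    mu = z.mean(axis=0)                     # per-unit mean over the mini-batch
    var = z.var(axis=0)                     # per-unit variance over the mini-batch
    z_hat = (z - mu) / np.sqrt(var + eps)   # zero-mean, unit-variance
    return gamma * z_hat + beta             # learned scale and shift

# Example usage on a hypothetical mini-batch of pre-activations
z = np.random.randn(64, 128) * 3.0 + 2.0
gamma = np.ones(128)
beta = np.zeros(128)
z_bn = batch_norm_forward(z, gamma, beta)
print(z_bn.mean(axis=0)[:3], z_bn.var(axis=0)[:3])  # approximately 0 and 1
```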

(3) Batch normalization acts as a (slight) regularizer.

In batch normalization, the mean and variance are computed on each mini-batch, which contains only a modest number of samples, so these statistics are noisy. Like dropout, this adds some noise to the hidden layers' activations (dropout randomly multiplies each activation by 0 or 1).
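To see where the noise comes from, here is a small sketch (NumPy, with arbitrary shapes) comparing mini-batch statistics against the full-data statistics for a single hidden unit:

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.normal(loc=2.0, scale=3.0, size=(10_000, 1))  # one hidden unit

# Statistics over the full dataset vs. over small mini-batches
print("full-data mean/std:", activations.mean(), activations.std())
for batch in np.split(activations[:256], 4):          # four mini-batches of 64
    print("mini-batch mean/std:", batch.mean(), batch.std())
# The mini-batch estimates fluctuate around the full-data values,
# injecting a small amount of noise into the normalized activations.
```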

This is only a minor side effect, though; do not rely on batch normalization as a regularizer.
