How to Make a Neural Network Converge (as fast as possible)

(1) Increase the batch size
When GPU/CPU memory is limited, a larger effective batch can be obtained by increasing iter_size (gradient accumulation).
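A minimal sketch of the iter_size trick: accumulate gradients over several small mini-batches, then apply a single update with their average. The model, loss, and `compute_grad` below are toy stand-ins, not a real framework API.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)  # toy model parameters

def compute_grad(w, X, y):
    # gradient of the mean squared error ||Xw - y||^2 / N (toy backward pass)
    return 2 * X.T @ (X @ w - y) / len(y)

iter_size = 4      # number of small mini-batches to accumulate
lr = 0.1
grad_accum = np.zeros_like(w)

for _ in range(iter_size):
    X = rng.normal(size=(8, 3))            # mini-batch that fits in memory
    y = X @ np.array([1.0, -2.0, 0.5])
    grad_accum += compute_grad(w, X, y)

# one update with the averaged gradient, approximating a batch of 8 * iter_size
w -= lr * grad_accum / iter_size
```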
(2) Manually tune the learning rate policy
When the validation error plateaus, divide the learning rate by 10.
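One way to sketch the "divide by 10 on plateau" policy; the `patience` and `tol` parameters and the error values are illustrative, not from the note.

```python
def step_lr_on_plateau(lr, errors, patience=3, factor=10.0, tol=1e-4):
    """Divide lr by `factor` if the last `patience` errors stopped improving."""
    if len(errors) > patience:
        recent_best = min(errors[-patience:])
        earlier_best = min(errors[:-patience])
        if recent_best > earlier_best - tol:   # no real improvement lately
            return lr / factor
    return lr

lr = 0.1
val_errors = [0.9, 0.5, 0.30, 0.30, 0.30, 0.30]  # error has plateaued
lr = step_lr_on_plateau(lr, val_errors)           # lr drops to 0.01
```

Frameworks ship this as a built-in scheduler (e.g. `torch.optim.lr_scheduler.ReduceLROnPlateau`), so a hand-rolled version is rarely needed in practice.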
(3) Initialization
For ReLU networks, the weights of each layer should be drawn from a normal distribution N(0, 2/n_l), where n_l = k * k * c (kernel size k, input channels c).
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
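He initialization from that paper can be sketched directly in NumPy; the layer shape below is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(k, c, out_channels):
    """Draw conv weights from N(0, 2/n_l) with n_l = k * k * c (He init for ReLU)."""
    n_l = k * k * c
    std = np.sqrt(2.0 / n_l)
    return rng.normal(0.0, std, size=(out_channels, c, k, k))

# example: 3x3 conv, 64 input channels, 128 output channels
W = he_init(k=3, c=64, out_channels=128)
# empirical std should be close to sqrt(2 / (3*3*64)) ≈ 0.0589
```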
(4) Gradient descent method
Currently SGD (typically with momentum) is the standard choice.
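A minimal sketch of SGD with momentum on a toy quadratic loss; the hyperparameters (lr=0.01, mu=0.9) are common defaults, not values from the note.

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One SGD-with-momentum update: velocity accumulates past gradients."""
    v = mu * v - lr * grad
    return w + v, v

# toy quadratic loss f(w) = 0.5 * ||w||^2, whose gradient is w itself
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(100):
    w, v = sgd_momentum_step(w, v, grad=w)
# w spirals in toward the minimum at the origin
```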
(5) Data Pre-Processing
PCA, whitening
Subtract the mean image (e.g. AlexNet; mean image = [32,32,3] array)
Subtract the per-channel mean (e.g. VGGNet; mean along each channel = 3 numbers)
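The two mean-subtraction schemes differ only in which axes are averaged; a sketch on a toy batch of CIFAR-sized images (the dataset here is random, for shape illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(1000, 32, 32, 3))  # N x H x W x C

# AlexNet-style: subtract the full mean image (a [32, 32, 3] array)
mean_image = images.mean(axis=0)            # shape (32, 32, 3)
centered_alex = images - mean_image

# VGGNet-style: subtract the per-channel mean (3 numbers)
channel_mean = images.mean(axis=(0, 1, 2))  # shape (3,)
centered_vgg = images - channel_mean
```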
(6) BN
Batch Normalization is already standard practice.
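The training-mode forward pass of Batch Normalization normalizes each feature over the batch, then applies a learnable scale and shift. A minimal sketch (forward pass only; running statistics and backward pass omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature to zero mean / unit variance, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(64, 10))   # batch of 64 samples, 10 features
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
# y has per-feature mean ≈ 0 and std ≈ 1
```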
