fan_in and fan_out in Deep Learning

In Understanding the difficulty of training deep feedforward neural networks, fan_in is the number of neurons in layer i and fan_out is the number of neurons in layer i+1. Since convolutional networks are generally not fully connected, fan_in and fan_out are computed differently for them.
In PyTorch:

$fan_{in}=channels_{in}\times kernel_{width}\times kernel_{height}$

$fan_{out}=channels_{out}\times kernel_{width}\times kernel_{height}$
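These two formulas can be sketched in a few lines of Python, following the shape convention PyTorch uses for conv weights, `(out_channels, in_channels, kernel_height, kernel_width)` (the function name here is illustrative; PyTorch computes the same quantities in its internal `torch.nn.init._calculate_fan_in_and_fan_out` helper):

```python
def conv_fan_in_fan_out(weight_shape):
    """fan_in / fan_out for a conv weight of shape
    (out_channels, in_channels, kernel_height, kernel_width)."""
    out_channels, in_channels, k_h, k_w = weight_shape
    receptive_field_size = k_h * k_w          # kernel area
    fan_in = in_channels * receptive_field_size
    fan_out = out_channels * receptive_field_size
    return fan_in, fan_out

# e.g. a 3x3 conv mapping 64 -> 128 channels
print(conv_fan_in_fan_out((128, 64, 3, 3)))  # (576, 1152)
```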

Based on my reading of http://deeplearning.net/tutorial/lenet.html, together with the description in https://stackoverflow.com/questions/42670274/how-to-calculate-fan-in-and-fan-out-in-xavier-initialization-for-neural-networks, a more precise formulation is:
$fan_{in} = channels_{in}\times receptivefield_{height}\times receptivefield_{width}$

$fan_{out} = \frac{channels_{out}\times receptivefield_{height}\times receptivefield_{width}}{maxpool_{area}}$
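A small sketch of this second convention, following the fan_out formula in the deeplearning.net LeNet tutorial, where fan_out is divided by the area of the max pooling that follows the conv layer (function and parameter names here are illustrative):

```python
def lenet_fan_in_fan_out(in_channels, out_channels,
                         field_h, field_w, pool_h=1, pool_w=1):
    """fan_in / fan_out as in the deeplearning.net LeNet tutorial:
    fan_out is shrunk by the area of the subsequent max pooling."""
    fan_in = in_channels * field_h * field_w
    fan_out = out_channels * field_h * field_w // (pool_h * pool_w)
    return fan_in, fan_out

# 5x5 conv from 1 -> 20 feature maps, followed by 2x2 max pooling
print(lenet_fan_in_fan_out(1, 20, 5, 5, 2, 2))  # (25, 125)
```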

Under this second formulation, a dilated convolution has a larger receptive field than an ordinary convolution, so the $receptivefield$ differs from the $kernel$ size and the two conventions give different results.
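To make the difference concrete, the spatial extent of a dilated kernel is $d\times(k-1)+1$, so the two conventions diverge exactly when dilation $d>1$. A sketch (variable names are illustrative):

```python
def effective_kernel(k, dilation):
    """Spatial extent of a dilated kernel: d*(k-1) + 1."""
    return dilation * (k - 1) + 1

in_channels, k, d = 64, 3, 2
k_eff = effective_kernel(k, d)                   # 5 for a 3x3 kernel, dilation 2
fan_in_by_kernel = in_channels * k * k           # kernel-size convention: 576
fan_in_by_field = in_channels * k_eff * k_eff    # receptive-field convention: 1600
print(k_eff, fan_in_by_kernel, fan_in_by_field)
```

Note that a dilated conv still has only $k\times k$ weights per channel, which is why PyTorch's kernel-based formula is unaffected by dilation while the receptive-field version is not.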
