For background, see CS231n course notes 1: Introduction.
Everything in this post is the author's own thinking; its correctness has not been verified, and corrections are welcome.
Batchnorm is conceptually simple and easy to implement, yet it has many desirable properties; see the course notes for specifics. The figure below briefly summarizes what Batchnorm computes and why it helps (for details, see CS231n course notes 5.3: Batch Normalization):
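In equation form, for a mini-batch of activations x_1, ..., x_m, batch normalization computes, per feature dimension (this is the standard transform from the original BN paper, written out here alongside the figure):

\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta

where \gamma and \beta are learned per-dimension scale and shift parameters.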
The code below implements exactly the functionality described above. Things to note:
Note: for the broadcasting used here, see the post on tips for python, numpy, scipy, and matplotlib.
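As a minimal illustration of that broadcasting (the names here are only for this example), subtracting a (D,) vector from an (N, D) matrix applies it row by row:

import numpy as np

x = np.random.randn(4, 3)      # a batch of 4 samples with 3 features
mean = np.mean(x, axis=0)      # shape (3,): per-feature mean
var = np.var(x, axis=0)        # shape (3,): per-feature variance
# (4, 3) - (3,) broadcasts the per-feature statistics across the batch
x_hat = (x - mean) / np.sqrt(var + 1e-5)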
if mode == 'train':
    # Per-feature statistics over the mini-batch (axis 0)
    mean = np.mean(x, axis=0)
    var = np.var(x, axis=0)
    # Exponential moving averages, used at test time
    running_mean = running_mean * momentum + (1 - momentum) * mean
    running_var = running_var * momentum + (1 - momentum) * var
    # Normalize, then scale and shift: out = gamma * x_hat + beta
    out_media = (x - mean) / np.sqrt(var + eps)
    out = gamma * out_media + beta
    cache = (out_media, x, mean, var, beta, gamma, eps)
elif mode == 'test':
    # Use the running statistics accumulated during training
    out_media = (x - running_mean) / np.sqrt(running_var + eps)
    out = gamma * out_media + beta
    cache = (out_media, x, running_mean, running_var, beta, gamma, eps)
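A quick sanity check of the forward pass. This is only a sketch: it assumes the snippet above is wrapped in a CS231n-style function batchnorm_forward(x, gamma, beta, bn_param) that reads the mode and running statistics from bn_param and returns (out, cache):

import numpy as np

N, D = 200, 3
x = 10 * np.random.randn(N, D) + 5
gamma = 2.0 * np.ones(D)
beta = 0.5 * np.ones(D)
# Hypothetical wrapper assumed here; bn_param carries the mode and the
# running statistics, as in the CS231n skeleton
bn_param = {'mode': 'train', 'running_mean': np.zeros(D), 'running_var': np.zeros(D)}
out, _ = batchnorm_forward(x, gamma, beta, bn_param)
print(out.mean(axis=0))  # close to beta  (0.5 per dimension)
print(out.std(axis=0))   # close to gamma (2.0 per dimension)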
Now backpropagate through the forward pass above (for details, see CS231n course notes 4.1: Backpropagation). Worth noting:
# Unpack the values saved by the forward pass
out_media, x, mean, var, beta, gamma, eps = cache
# Gradients of the scale and shift: out = gamma * out_media + beta
dout_media = dout * gamma
dgamma = np.sum(dout * out_media, axis=0)
dbeta = np.sum(dout, axis=0)
# Path 1: direct dependence of out_media on x through the numerator
dx = dout_media / np.sqrt(var + eps)
dmean = -np.sum(dout_media / np.sqrt(var + eps), axis=0)
# Path 2: dependence through the batch variance (via the standard deviation)
dstd = np.sum(-dout_media * (x - mean) / (var + eps), axis=0)
dvar = 1. / 2. / np.sqrt(var + eps) * dstd
# var = mean((x - mean)**2), so each squared deviation receives dvar / N
dx_minus_mean_square = dvar / x.shape[0]
dx_minus_mean = 2 * (x - mean) * dx_minus_mean_square
dx += dx_minus_mean
dmean += np.sum(-dx_minus_mean, axis=0)
# Path 3: mean = sum(x) / N contributes dmean / N to every sample
dx += dmean / x.shape[0]
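The backward pass is easiest to validate with a numerical gradient check. A sketch, again assuming the snippets above are wrapped as batchnorm_forward(x, gamma, beta, bn_param) and batchnorm_backward(dout, cache) with the CS231n signatures, using the eval_numerical_gradient_array helper that ships with the assignment code:

import numpy as np
from cs231n.gradient_check import eval_numerical_gradient_array

N, D = 4, 5
x = np.random.randn(N, D)
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}

# Numerical gradients of the forward pass w.r.t. x, gamma and beta
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda g: batchnorm_forward(x, g, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
dgamma_num = eval_numerical_gradient_array(fg, gamma, dout)
dbeta_num = eval_numerical_gradient_array(fb, beta, dout)

# Analytic gradients from the backward pass above
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print(np.max(np.abs(dx - dx_num)))          # all three differences should be
print(np.max(np.abs(dgamma - dgamma_num)))  # on the order of 1e-8 or smaller
print(np.max(np.abs(dbeta - dbeta_num)))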
For the implementation of a multi-layer neural network without Batchnorm, see CS231n assignment notes 2.2: Implementing a multi-layer neural network.
Initialize the parameters. Note that both beta and gamma need to be initialized, and each dimension of x gets its own independent pair of parameters.
self.bn_params = []
if self.use_batchnorm:
    # One bn_param dict per hidden layer; the output layer has no BN
    self.bn_params = [{'mode': 'train'} for i in range(self.num_layers - 1)]
    for i in range(self.num_layers - 1):
        # beta starts at 0 and gamma at 1, one value per feature dimension
        self.params['beta' + str(i + 1)] = np.zeros(hidden_dims[i])
        self.params['gamma' + str(i + 1)] = np.ones(hidden_dims[i])
Compute the scores. Note that no BN is applied to the output of the final fully connected layer, and that running_mean and running_var are internal per-layer state: each layer updates only its own running statistics, so the means and variances of different layers are independent.
cache = {}
hidden_value = None
# First layer: affine -> (batchnorm) -> ReLU
hidden_value, cache['fc1'] = affine_forward(X, self.params['W1'], self.params['b1'])
if self.use_batchnorm:
    hidden_value, cache['bn1'] = batchnorm_forward(hidden_value, self.params['gamma1'], self.params['beta1'], self.bn_params[0])
hidden_value, cache['relu1'] = relu_forward(hidden_value)
# Hidden layers 2 .. num_layers-1
for index in range(2, self.num_layers):
    hidden_value, cache['fc' + str(index)] = affine_forward(hidden_value, self.params['W' + str(index)], self.params['b' + str(index)])
    if self.use_batchnorm:
        hidden_value, cache['bn' + str(index)] = batchnorm_forward(hidden_value, self.params['gamma' + str(index)], self.params['beta' + str(index)], self.bn_params[index - 1])
    hidden_value, cache['relu' + str(index)] = relu_forward(hidden_value)
# Final layer: plain affine, no BN and no ReLU
scores, cache['score'] = affine_forward(hidden_value, self.params['W' + str(self.num_layers)], self.params['b' + str(self.num_layers)])
Compute the gradients. Note that this assignment does not regularize beta and gamma, although open-source libraries such as Keras expose interfaces for regularizing them.
loss, grads = 0.0, {}
loss, dscores = softmax_loss(scores, y)
# L2 regularization on the weights only (not on beta/gamma)
for index in range(1, self.num_layers + 1):
    loss += 0.5 * self.reg * np.sum(self.params['W' + str(index)] ** 2)
# Backprop through the final (score) layer
dhidden_value, grads['W' + str(self.num_layers)], grads['b' + str(self.num_layers)] = affine_backward(dscores, cache['score'])
# Hidden layers num_layers-1 .. 2, in reverse order
for index in range(self.num_layers - 1, 1, -1):
    dhidden_value = relu_backward(dhidden_value, cache['relu' + str(index)])
    if self.use_batchnorm:
        dhidden_value, grads['gamma' + str(index)], grads['beta' + str(index)] = batchnorm_backward(dhidden_value, cache['bn' + str(index)])
    dhidden_value, grads['W' + str(index)], grads['b' + str(index)] = affine_backward(dhidden_value, cache['fc' + str(index)])
# First layer
dhidden_value = relu_backward(dhidden_value, cache['relu1'])
if self.use_batchnorm:
    dhidden_value, grads['gamma1'], grads['beta1'] = batchnorm_backward(dhidden_value, cache['bn1'])
dhidden_value, grads['W1'], grads['b1'] = affine_backward(dhidden_value, cache['fc1'])
# Add the regularization gradient for the weights
for index in range(1, self.num_layers + 1):
    grads['W' + str(index)] += self.reg * self.params['W' + str(index)]
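Finally, the whole network can be gradient-checked through model.loss. A sketch assuming the FullyConnectedNet constructor and eval_numerical_gradient helper from the CS231n assignment 2 skeleton:

import numpy as np
from cs231n.classifiers.fc_net import FullyConnectedNet
from cs231n.gradient_check import eval_numerical_gradient

N, D, C = 5, 10, 4
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)

model = FullyConnectedNet([8, 8], input_dim=D, num_classes=C,
                          reg=0.0, use_batchnorm=True, dtype=np.float64)
loss, grads = model.loss(X, y)
for name in sorted(grads):
    # eval_numerical_gradient perturbs model.params[name] in place,
    # so the lambda only needs to re-evaluate the training loss
    f = lambda _: model.loss(X, y)[0]
    grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
    print(name, np.max(np.abs(grad_num - grads[name])))  # should be tiny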