Recently I've been implementing yolo in caffe, and I noticed that in the loss layer the gradient computed at anchors matched to a ground truth has the opposite sign to the darknet source code:
// https://github.com/pjreddie/darknet/blob/f6d861736038da22c9eb0739dca84003c5a5e275/src/yolo_layer.c#L93-L108
178 l.delta[obj_index] = 0 - l.output[obj_index];
183 l.delta[obj_index] = 1 - l.output[obj_index];
223 l.delta[obj_index] = 1 - l.output[obj_index];
while in caffe it is always output - 0/1 that is written into diff as the gradient. This difference once made me (obsessive as I am) want to go and change the code (luckily a senior colleague stopped me in time). After looking things up I finally understood the reasoning behind it, so I'm writing it down here.
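To make the sign difference concrete before diving in, here is a tiny sketch of my own (the function and array names are made up, not taken from either codebase): for one matched anchor, the two frameworks expect the loss layer to write back values with opposite signs.
/* Sketch only: output is the sigmoid activation, gt the ground-truth label;
 * the array and function names here are hypothetical. */
void caffe_style_backward(const float *output, const float *gt,
                          float *diff, int i)
{
    diff[i] = output[i] - gt[i];    /* caffe: diff holds +dJ/d(output) */
}

void darknet_style_backward(const float *output, const float *gt,
                            float *delta, int i)
{
    delta[i] = gt[i] - output[i];   /* darknet: delta holds -dJ/d(output) */
}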
yolo uses a squared-error loss. Concretely, yolo has three pyramid levels, and the last layer of each pyramid is a convolutional layer; the convolution output is passed through a sigmoid and then a squared loss is taken against the ground truth, i.e.:
$$J(w,b)=\sum(\sigma(conv(x))-gt)^2$$
Does this look familiar? Right, this is exactly the cost function of logistic regression, so we can derive the gradient and the weight update the same way as in logistic regression.
First, for a logistic-regression problem with cost function $J=\sum_i^N(h_w(x_i)-y_i)^2$, the gradient-descent step is:
$$w := w - lr\dfrac{\partial J(w,b)}{\partial w}$$
$$b := b - lr\dfrac{\partial J(w,b)}{\partial b}$$
The gradients above can be written as
$$\dfrac{\partial J(w,b)}{\partial w}=-\dfrac{1}{N}\sum_i^N(y_i-output(x_i))x_i$$
$$\dfrac{\partial J(w,b)}{\partial b}=-\dfrac{1}{N}\sum_i^N(y_i-output(x_i))$$
Reference: https://blog.csdn.net/u014258807/article/details/80616647
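Applying the same idea to the yolo loss above (spelling out the chain-rule step myself; the constant factor 2 is normally absorbed into the learning rate), the gradient with respect to the sigmoid output $\sigma_i$, and its negative, are:
$$\dfrac{\partial J(w,b)}{\partial \sigma_i}=2(\sigma_i-gt_i)\qquad\qquad -\dfrac{\partial J(w,b)}{\partial \sigma_i}=2(gt_i-\sigma_i)$$
Keep these two signs in mind: the first form is what caffe stores in diff, and the second is exactly what darknet writes into l.delta.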
First, recall that a blob in caffe consists of two parts, the data data_ and the gradient diff_. The weight update in caffe is done by Net::Update() in net.cpp:
// https://github.com/BVLC/caffe/blob/master/src/caffe/net.cpp
template <typename Dtype>
void Net<Dtype>::Update() {
  for (int i = 0; i < learnable_params_.size(); ++i) {
    learnable_params_[i]->Update();
  }
}
and the Update() called on each blob is implemented in blob.cpp:
// https://github.com/BVLC/caffe/blob/master/src/caffe/blob.cpp
template <typename Dtype>
void Blob<Dtype>::Update() {
  // We will perform update based on where the data is located.
  switch (data_->head()) {
  case SyncedMemory::HEAD_AT_CPU:
    // perform computation on CPU
    caffe_axpy<Dtype>(count_, Dtype(-1),
        static_cast<const Dtype*>(diff_->cpu_data()),
        static_cast<Dtype*>(data_->mutable_cpu_data()));
    break;
  case SyncedMemory::HEAD_AT_GPU:
  case SyncedMemory::SYNCED:
#ifndef CPU_ONLY
    // perform computation on GPU
    caffe_gpu_axpy<Dtype>(count_, Dtype(-1),
        static_cast<const Dtype*>(diff_->gpu_data()),
        static_cast<Dtype*>(data_->mutable_gpu_data()));
#else
    NO_GPU;
#endif
    break;
  default:
    LOG(FATAL) << "Syncedmem not initialized.";
  }
}
where caffe_axpy() is implemented as:
// https://github.com/BVLC/caffe/blob/master/src/caffe/util/math_functions.cpp
template <>
void caffe_axpy<float>(const int N, const float alpha, const float* X,
    float* Y) { cblas_saxpy(N, alpha, X, 1, Y, 1); }

template <>
void caffe_axpy<double>(const int N, const double alpha, const double* X,
    double* Y) { cblas_daxpy(N, alpha, X, 1, Y, 1); }
The usage of cblas_saxpy() is documented here:
https://developer.apple.com/documentation/accelerate/1513188-cblas_saxpy?language=objc
void cblas_saxpy(const int __N, const float __alpha, const float *__X, const int __incX, float *__Y, const int __incY);
On return, the contents of vector Y are replaced with the result. The value computed is (alpha * X[i]) + Y[i].
So the updated weights are Dtype(-1) * diff_ + data_, i.e. the update performs data_ ← data_ − diff_.
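Here is a tiny standalone example of my own (not caffe code) showing what this axpy call with alpha = Dtype(-1) does; data plays the role of data_ and diff the role of diff_:
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    float data[3] = {0.5f, -0.2f, 1.0f};   /* plays the role of data_ (weights)   */
    float diff[3] = {0.1f,  0.3f, -0.4f};  /* plays the role of diff_ (gradients) */

    /* Same call pattern as caffe_axpy(count_, Dtype(-1), diff_, data_):
     * data[i] = (-1) * diff[i] + data[i], i.e. data -= diff. */
    cblas_saxpy(3, -1.0f, diff, 1, data, 1);

    printf("%f %f %f\n", data[0], data[1], data[2]);   /* prints 0.4 -0.5 1.4 */
    return 0;
}
(Link against any CBLAS implementation, e.g. OpenBLAS or Apple's Accelerate.)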
Since caffe updates data_ as Dtype(-1) * diff_ + data_, the diff_ handed back should be $\dfrac{\partial J(w,b)}{\partial \sigma}=(\sigma_i-gt_i)$ (constant factor omitted here). So the question becomes: why does darknet pass the negative gradient? Its update code is:
// https://github.com/pjreddie/darknet/blob/f6d861736038da22c9eb0739dca84003c5a5e275/src/convolutional_layer.c
void update_convolutional_layer(convolutional_layer l, update_args a)
{
    float learning_rate = a.learning_rate*l.learning_rate_scale;
    float momentum = a.momentum;
    float decay = a.decay;
    int batch = a.batch;

    axpy_cpu(l.n, learning_rate/batch, l.bias_updates, 1, l.biases, 1);
    scal_cpu(l.n, momentum, l.bias_updates, 1);

    if(l.scales){
        axpy_cpu(l.n, learning_rate/batch, l.scale_updates, 1, l.scales, 1);
        scal_cpu(l.n, momentum, l.scale_updates, 1);
    }

    axpy_cpu(l.nweights, -decay*batch, l.weights, 1, l.weight_updates, 1);
    axpy_cpu(l.nweights, learning_rate/batch, l.weight_updates, 1, l.weights, 1);
    scal_cpu(l.nweights, momentum, l.weight_updates, 1);
}
As we can see, darknet updates the weights with $+\dfrac{lr}{batch}$ times the stored updates (a positive axpy), so of course the negative gradient has to be passed in.
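To close, a side-by-side sketch of my own (scalar weight, no momentum or weight decay, so it is not a faithful copy of either framework's update loop) showing that the two conventions end up performing the same gradient-descent step $w := w - lr\dfrac{\partial J}{\partial w}$:
/* Sketch only: both conventions implement w <- w - lr * dJ/dw. */
float caffe_style_step(float w, float dJ_dw, float lr)
{
    float diff = lr * dJ_dw;   /* caffe: diff_ holds the positive gradient
                                  (the solver has already scaled it by lr) */
    return w - diff;           /* Blob::Update(): data_ += (-1) * diff_   */
}

float darknet_style_step(float w, float dJ_dw, float lr)
{
    float update = -dJ_dw;     /* darknet: *_updates hold the negative gradient */
    return w + lr * update;    /* update_convolutional_layer(): axpy with +lr/batch */
}
Both functions return w - lr * dJ_dw, which is why neither sign convention is "wrong"; each one simply matches its own update routine.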