lr_policy in Caffe

While working through some small Caffe experiments these past few days, I noticed that the solver files all contain an lr_policy (learning-rate decay policy), but since it is never clearly explained, I did not really understand how each policy decays the learning rate. After some searching online, I found that this is actually documented in /caffe-master/src/caffe/proto/caffe.proto; the relevant comment is excerpted below:

// The learning rate decay policy. The currently implemented learning rate
// policies are as follows:
//    - fixed: always return base_lr.
//    - step: return base_lr * gamma ^ (floor(iter / step))
//    - exp: return base_lr * gamma ^ iter
//    - inv: return base_lr * (1 + gamma * iter) ^ (- power)
//    - multistep: similar to step but it allows non uniform steps defined by
//      stepvalue
//    - poly: the effective learning rate follows a polynomial decay, to be
//      zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)
//    - sigmoid: the effective learning rate follows a sigmoid decay
//      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
//
// where base_lr, max_iter, gamma, step, stepvalue and power are defined
// in the solver parameter protocol buffer, and iter is the current iteration.
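To make these formulas concrete, here is a minimal Python sketch (not Caffe code; the function name effective_lr and its keyword arguments are made up for illustration) that evaluates each policy at a given iteration, following the formulas quoted above. In solver.prototxt the corresponding parameters are base_lr, gamma, stepsize, stepvalue, power, and max_iter.

import math

def effective_lr(policy, base_lr, iter, gamma=None, stepsize=None,
                 stepvalues=None, power=None, max_iter=None):
    """Learning rate at iteration `iter`, per the caffe.proto formulas above."""
    if policy == "fixed":
        return base_lr
    elif policy == "step":
        # drops by a factor of gamma every `stepsize` iterations
        return base_lr * gamma ** (iter // stepsize)
    elif policy == "exp":
        return base_lr * gamma ** iter
    elif policy == "inv":
        return base_lr * (1 + gamma * iter) ** (-power)
    elif policy == "multistep":
        # like "step", but the drop points are listed explicitly in stepvalue
        current_step = sum(1 for sv in stepvalues if iter >= sv)
        return base_lr * gamma ** current_step
    elif policy == "poly":
        # polynomial decay down to zero at max_iter
        return base_lr * (1 - iter / max_iter) ** power
    elif policy == "sigmoid":
        return base_lr * (1.0 / (1 + math.exp(-gamma * (iter - stepsize))))
    else:
        raise ValueError("unknown lr_policy: %s" % policy)

# Example: "step" policy with base_lr=0.01, gamma=0.1, stepsize=10000
for it in (0, 9999, 10000, 20000, 30000):
    print(it, effective_lr("step", 0.01, it, gamma=0.1, stepsize=10000))

Running the example shows the rate staying at 0.01 until iteration 10000, then dropping to 0.001, 0.0001, and so on at each multiple of stepsize, which is the behavior described by the "step" formula.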

In addition, there is a more concise, graphical explanation of these policies on Stack Overflow:

http://stackoverflow.com/questions/30033096/what-is-lr-policy-in-caffe
