Caffe training notes

1 Preface

This article summarizes my day-to-day experience tuning training parameters and will keep being refined. In earlier training runs I never wrote the tuned parameters down and kept forgetting them, so this also serves as a personal memo. Corrections are welcome if anything is wrong or improperly cited.
Last modified date: 2019-03-01

2 Optimizers

Caffe provides six optimization methods in total:

  • Stochastic Gradient Descent (type: "SGD")
  • AdaDelta (type: "AdaDelta")
  • Adaptive Gradient (type: "AdaGrad")
  • Adam (type: "Adam")
  • Nesterov's Accelerated Gradient (type: "Nesterov")
  • RMSprop (type: "RMSProp")

2.1 SGD

The commented-out lines can be omitted because they have default values, but if you want to optimize the training you should still tune them yourself.

type: "SGD"   #default
#momentum: 0.9
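
For reference, Caffe's SGD solver applies the classic momentum update: V ← momentum · V − lr · ∇L(W), then W ← W + V. Below is a minimal NumPy sketch of one step on a toy problem (an illustration of the rule, not Caffe code):

import numpy as np

def sgd_momentum_step(w, v, grad, base_lr=0.01, momentum=0.9):
    # v <- momentum * v - base_lr * grad;  w <- w + v
    v = momentum * v - base_lr * grad
    w = w + v
    return w, v

# toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself
w, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(100):
    w, v = sgd_momentum_step(w, v, grad=w)
print(w)  # approaches the minimum at [0, 0]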

2.2 AdaDelta


type: "AdaDelta"
#momentum: 0.95
#delta: 1e-6
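
Here momentum plays the role of AdaDelta's decay rate ρ and delta is the stabilizing ε. A rough sketch of the standard AdaDelta rule follows; Caffe additionally scales the step by the learning rate, which is why the AdaDelta example in section 4.1 uses base_lr: 1.0.

import numpy as np

def adadelta_step(w, eg2, edx2, grad, momentum=0.95, delta=1e-6, base_lr=1.0):
    # running average of squared gradients
    eg2 = momentum * eg2 + (1.0 - momentum) * grad ** 2
    # step scaled by the ratio of the two running RMS values
    dx = -np.sqrt(edx2 + delta) / np.sqrt(eg2 + delta) * grad
    # running average of squared updates
    edx2 = momentum * edx2 + (1.0 - momentum) * dx ** 2
    return w + base_lr * dx, eg2, edx2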

2.3 AdaGrad

type: "AdaGrad"

2.4 Adam

type: "Adam"
#momentum: 0.9
#momentum2: 0.999
#delta: 1e-8
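
In this block momentum and momentum2 correspond to Adam's β1 and β2, and delta is the ε added for numerical stability. A minimal sketch of the standard Adam update, with the field names kept as in the solver (not Caffe's actual implementation):

import numpy as np

def adam_step(w, m, v, grad, t, base_lr=0.001, momentum=0.9,
              momentum2=0.999, delta=1e-8):
    # t is the 1-based iteration count, needed for bias correction
    m = momentum * m + (1.0 - momentum) * grad            # 1st-moment estimate
    v = momentum2 * v + (1.0 - momentum2) * grad ** 2     # 2nd-moment estimate
    m_hat = m / (1.0 - momentum ** t)
    v_hat = v / (1.0 - momentum2 ** t)
    return w - base_lr * m_hat / (np.sqrt(v_hat) + delta), m, v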

2.5 Nesterov

type: "Nesterov"
#momentum: 0.95

2.6 RMSProp

type: "RMSProp"
#rms_decay: 0.98
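
rms_decay controls how fast the running average of squared gradients decays. A rough sketch of the standard RMSProp rule, with delta standing in for the stabilizing ε:

import numpy as np

def rmsprop_step(w, ms, grad, base_lr=0.001, rms_decay=0.98, delta=1e-8):
    # decayed mean of squared gradients, then a step scaled by its RMS
    ms = rms_decay * ms + (1.0 - rms_decay) * grad ** 2
    return w - base_lr * grad / (np.sqrt(ms) + delta), ms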

2.7 Practical tips

  • Adam usually gives good results and converges much faster than SGD
  • L-BFGS is suited to full-batch optimization
  • Several methods can sometimes be combined, e.g. warm up with SGD and then switch to Adam
  • For unusual requirements, such as DeepBit where the convergence of two losses has to be controlled, the slower SGD is the better fit

3 Learning rate policies

The comment below, taken from the SolverParameter definition in caffe.proto (also quoted in [4]), lists the available policies:

// The learning rate decay policy. The currently implemented learning rate
// policies are as follows:
//    - fixed: always return base_lr.
//    - step: return base_lr * gamma ^ (floor(iter / step))
//    - exp: return base_lr * gamma ^ iter
//    - inv: return base_lr * (1 + gamma * iter) ^ (- power)
//    - multistep: similar to step but it allows non uniform steps defined by
//      stepvalue
//    - poly: the effective learning rate follows a polynomial decay, to be
//      zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)
//    - sigmoid: the effective learning rate follows a sigmod decay
//      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
//
// where base_lr, max_iter, gamma, step, stepvalue and power are defined
// in the solver parameter protocol buffer, and iter is the current iteration.

lr_policy can be set to the following values; the corresponding learning rate is computed as described (a small Python sketch of these rules follows the list):

  • fixed:     keep base_lr unchanged.
  • step:      requires an additional stepsize; returns base_lr * gamma ^ (floor(iter / stepsize)), where iter is the current iteration
  • exp:       returns base_lr * gamma ^ iter, where iter is the current iteration
  • inv:       requires an additional power; returns base_lr * (1 + gamma * iter) ^ (- power)
  • multistep: requires additional stepvalue entries. Similar to step, but step changes the rate at uniform intervals, while multistep changes it at the given stepvalue iterations
  • poly:      polynomial decay of the learning rate; returns base_lr * (1 - iter/max_iter) ^ (power)
  • sigmoid:   sigmoid decay of the learning rate; returns base_lr * (1 / (1 + exp(-gamma * (iter - stepsize))))
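
The sketch below re-implements the rules above in plain Python so a schedule can be inspected before training starts; it mirrors the formulas, not Caffe's source code, and the helper name caffe_lr is just for this note.

import math

def caffe_lr(policy, iter, base_lr, gamma=0.1, power=0.75,
             stepsize=1000, stepvalues=(), max_iter=10000):
    # learning rate at iteration `iter` for each lr_policy
    if policy == "fixed":
        return base_lr
    if policy == "step":
        return base_lr * gamma ** (iter // stepsize)
    if policy == "exp":
        return base_lr * gamma ** iter
    if policy == "inv":
        return base_lr * (1 + gamma * iter) ** (-power)
    if policy == "multistep":
        # gamma is applied once for every stepvalue already passed
        return base_lr * gamma ** sum(1 for s in stepvalues if iter >= s)
    if policy == "poly":
        return base_lr * (1 - iter / max_iter) ** power
    if policy == "sigmoid":
        return base_lr / (1.0 + math.exp(-gamma * (iter - stepsize)))
    raise ValueError("unknown lr_policy: " + policy)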

3.1 fixed

base_lr: 0.01
lr_policy: "fixed"
max_iter: 400000

3.2 step

base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 30
max_iter: 100

3.3 exp

base_lr: 0.01
lr_policy: "exp"
gamma: 0.1
max_iter: 100


3.4 inv

base_lr: 0.01
lr_policy: "inv"
gamma: 0.1
power: 0.75
max_iter: 10000

3.5 multistep

base_lr: 0.01
lr_policy: "multistep"
gamma: 0.5
stepvalue: 1000
stepvalue: 3000
stepvalue: 4000
stepvalue: 4500
stepvalue: 5000
max_iter: 6000
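
With this configuration, gamma = 0.5 is applied once at each stepvalue, so the learning rate halves five times over the run; checking with the caffe_lr sketch from above:

for it in (0, 1000, 3000, 4000, 4500, 5000):
    print(it, caffe_lr("multistep", it, base_lr=0.01, gamma=0.5,
                       stepvalues=(1000, 3000, 4000, 4500, 5000)))
# 0.01 -> 0.005 -> 0.0025 -> 0.00125 -> 0.000625 -> 0.0003125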

3.6 poly

base_lr: 0.01
lr_policy: "poly"
power: 0.5
max_iter: 10000

3.7 sigmoid

base_lr: 0.01
lr_policy: "sigmoid"
gamma: -0.001
stepsize: 5000
max_iter: 10000

4 Solver file configuration summary

4.1 Classification sample

In classification training, the solver file has five main groups of parameters to configure:

  • net
  • test
  • lr_policy
  • snapshot
  • solver

A concrete example is shown below:


# ------- 1. config net ---------
net: "examples/mnist/lenet_train_test.prototxt"
# ------- 2. config test --------
test_iter: 100
test_interval: 500
# ------ 3. config lr_policy --------
base_lr: 1.0
lr_policy: "fixed"
weight_decay: 0.0005
display: 100
max_iter: 10000
# ------4. config snapshot --------
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet_adadelta"
# ----- 5. config solver type -------
solver_mode: GPU
type: "AdaDelta"
delta: 1e-6
momentum: 0.95

4.2 Detection sample

In detection training, the solver file has six main groups of parameters to configure:

  • net
  • test
  • lr_policy
  • snapshot
  • solver
  • other

A concrete example is shown below:

# -------1. config net ----------
train_net: "example/MobileNetSSD_train.prototxt"
test_net: "example/MobileNetSSD_test.prototxt"
# -------2. config test --------
test_iter: 673
test_interval: 10000
# -------3. config lr_policy ------
base_lr: 0.0005
display: 10
max_iter: 120000
lr_policy: "multistep"
gamma: 0.5
weight_decay: 0.00005
stepvalue: 20000
stepvalue: 40000
# -------4. config snapshot ------
snapshot: 1000
snapshot_prefix: "snapshot/mobilenet"
snapshot_after_train: true
# -------5. config solver -------
solver_mode: GPU
type: "RMSProp"
# -------6. other -------
eval_type: "detection"
ap_version: "11point"
test_initialization: false
debug_info: false
average_loss: 10
iter_size: 1

5 Classification

To be completed later.

6 Detection

To be completed later.

6.1 YOLO

6.2 SSD

References

[1] Caffe Learning Series (10): Command-line parsing [https://www.cnblogs.com/denny402/p/5076285.html]

[2] The complete Caffe deep-learning training process [https://www.infoq.cn/article/whole-process-of-caffe-depth-learning-training]

[3] Caffe Learning Series (8): Solver optimization methods [https://www.cnblogs.com/denny402/p/5074212.html]

[4] Plots of the different learning rate policies (lr_policy) in the Caffe solver [https://blog.csdn.net/cuijyer/article/details/78195178]

[5] Pretrained models [https://github.com/BVLC/caffe/wiki/Model-Zoo]

[6] Fine-tuning a Pretrained Network for Style Recognition [https://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/02-fine-tuning.ipynb]

[7] http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html

[8] Three stages of fine-tuning models in Caffe (function details, framework overview) plus fine-tuning tips [https://blog.csdn.net/sinat_26917383/article/details/54999868]

[9] Common optimization algorithms (corresponding parameters in Caffe and TensorFlow) [https://blog.csdn.net/csyanbin/article/details/53460627]
