Parsing solver.prototxt

*_solver.prototxt

net: "test.prototxt"
# Path to the network definition file used for training.
test_iter: 100
# test_iter specifies how many forward passes (i.e., how many batches) are run
# during each test phase. In the MNIST example, the test network's batch size
# is set to 100 in the network definition, so with test_iter set to 100 a total
# of 100 * 100 = 10000 images are processed per test phase.
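A quick sanity check of that arithmetic, as a sketch; the batch size and the MNIST test-set size of 10000 are the values quoted above:

```python
# Sketch: verify that test_iter covers the whole test set,
# using the values from the MNIST example above.
test_batch_size = 100   # batch size of the test network
test_set_size = 10000   # number of images in the MNIST test set

# test_iter should satisfy: test_iter * batch_size >= test set size
test_iter = test_set_size // test_batch_size
print(test_iter)                     # 100
print(test_iter * test_batch_size)   # 10000 images per test phase
```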
test_interval: 500
# Run one test phase after every 500 training iterations.
base_lr: 0.01
# Initial (base) learning rate.
momentum: 0.9
# Momentum coefficient (often written as μ = 0.9).
weight_decay: 0.0005
# Weight decay: the L2 regularization coefficient applied to the weights.
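To see how momentum and weight_decay enter a single training step, here is a minimal sketch of an SGD-with-momentum update (the weight and gradient values are made up for illustration; this illustrates the formula, not Caffe's internal code):

```python
# Minimal sketch of SGD with momentum and L2 weight decay,
# using the hyperparameter values from this solver file.
momentum = 0.9
weight_decay = 0.0005
base_lr = 0.01

w = 1.0      # a single weight (hypothetical value)
v = 0.0      # momentum history for that weight
grad = 0.2   # hypothetical gradient of the loss w.r.t. w

# Weight decay adds an L2 penalty term to the gradient.
g = grad + weight_decay * w
# Momentum blends the previous update into the current one.
v = momentum * v + base_lr * g
w = w - v
print(v, w)  # update of 0.002005 applied to the weight
```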
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# These three parameters together control how the learning rate decays;
# the available decay policies and their formulas are listed below.
// The learning rate decay policy. The currently implemented learning rate
// policies are as follows:
//    - fixed: always return base_lr.
//    - step: return base_lr * gamma ^ (floor(iter / step))
//    - exp: return base_lr * gamma ^ iter
//    - inv: return base_lr * (1 + gamma * iter) ^ (- power)
//    - multistep: similar to step but it allows non uniform steps defined by
//      stepvalue
//    - poly: the effective learning rate follows a polynomial decay, to be
//      zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)
//    - sigmoid: the effective learning rate follows a sigmoid decay
//      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
// where base_lr, max_iter, gamma, step, stepvalue and power are defined
// in the solver parameter protocol buffer, and iter is the current iteration.
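The decay formulas above can be written out directly in Python as a sketch; the values plugged in at the end are the ones from this solver file (lr_policy "inv", gamma 0.0001, power 0.75):

```python
import math

def learning_rate(policy, it, base_lr, gamma=0.0, power=0.0,
                  step=1, max_iter=1, stepsize=1):
    """Effective learning rate at iteration `it`, following the
    formulas from the caffe.proto comment quoted above."""
    if policy == "fixed":
        return base_lr
    if policy == "step":
        return base_lr * gamma ** math.floor(it / step)
    if policy == "exp":
        return base_lr * gamma ** it
    if policy == "inv":
        return base_lr * (1 + gamma * it) ** (-power)
    if policy == "poly":
        return base_lr * (1 - it / max_iter) ** power
    if policy == "sigmoid":
        return base_lr * (1 / (1 + math.exp(-gamma * (it - stepsize))))
    raise ValueError("unknown policy: " + policy)

# With this solver's settings the rate decays slowly:
print(learning_rate("inv", 0, 0.01, gamma=0.0001, power=0.75))      # 0.01
print(learning_rate("inv", 10000, 0.01, gamma=0.0001, power=0.75))  # ≈ 0.0059
```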
display: 100
# Print training progress every 100 iterations.
snapshot: 5000
# Save a snapshot every 5000 iterations.
snapshot_prefix: "path_prefix"
# Snapshot prefix: more precisely, the save path plus a filename prefix,
# since the rest of the snapshot filename is generated automatically.
solver_mode: GPU
# Whether the solver runs on the CPU or the GPU.
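Since display, test_interval and snapshot are all iteration intervals, the events they trigger can be previewed with a small sketch (the interval values are the ones from this file; this models the schedule, not Caffe itself):

```python
# Sketch: which solver events fire at a given training iteration,
# using the intervals defined in this solver file.
display = 100
test_interval = 500
snapshot = 5000

def events_at(it):
    """Return the list of events triggered at iteration `it`."""
    ev = []
    if it > 0 and it % display == 0:
        ev.append("display")
    if it > 0 and it % test_interval == 0:
        ev.append("test")
    if it > 0 and it % snapshot == 0:
        ev.append("snapshot")
    return ev

print(events_at(500))    # ['display', 'test']
print(events_at(5000))   # ['display', 'test', 'snapshot']
```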

Writing the batch file:

F:/caffe/caffe-windows-master/bin/caffe.exe train --solver=C:/Users/Administrator/Desktop/caffe_test/cifar-10/cifar10_solver.prototxt --gpu=all
pause
