# Vanilla SGD: step directly against the gradient
x += -learning_rate * dx
Momentum helps SGD avoid getting stuck oscillating around saddle points, and it also provides some acceleration.
Momentum may overshoot the target at first, but it usually corrects itself back over time.
v = mu*v - learning_rate*dx  # integrate velocity (mu is the momentum coefficient, e.g. 0.9)
x += v                       # integrate position
The basic idea of Nesterov momentum is that, instead of evaluating dx at x, we evaluate the gradient at the look-ahead point x + mu*v and then apply the momentum update.
In other words, we first step to where the momentum would carry us, then compute the gradient there and update.
v_prev = v                        # back up the current velocity
v = mu*v - learning_rate*dx       # velocity update stays the same
x += -mu*v_prev + (1+mu)*v        # position update changes form (look-ahead rewritten in terms of x)
AdaGrad uses each parameter's accumulated squared gradients as the denominator of its update, which evens things out when gradient magnitudes differ greatly across parameters.
cache += dx**2                                      # per-parameter sum of squared gradients
x += -learning_rate*dx/(np.sqrt(cache)+1e-7)        # 1e-7 avoids division by zero
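To make that balancing effect concrete, here is a small self-contained numpy sketch (the gradient values are made up for illustration): two parameters receive gradients of very different magnitude, yet their effective AdaGrad step sizes end up the same.

import numpy as np

x = np.zeros(2)
cache = np.zeros(2)
learning_rate = 0.1
for t in range(100):
    dx = np.array([100.0, 0.01])                       # constant gradients of wildly different scale
    cache += dx**2                                     # AdaGrad accumulator
    step = -learning_rate * dx / (np.sqrt(cache) + 1e-7)
    x += step
print(step)   # both components are about -0.01: the scale difference has been normalized away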
RMSProp adds a decay factor on top of AdaGrad so that the accumulated squared-gradient term cannot grow without bound.
cache = decay_rate*cache + (1-decay_rate)*dx**2     # leaky accumulation of squared gradients
x += -learning_rate*dx/(np.sqrt(cache)+1e-7)
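A quick numeric check of that claim (constant toy gradient, values assumed for illustration): the AdaGrad cache grows linearly with the number of steps, while the RMSProp cache settles near the running mean of dx**2.

dx = 1.0
adagrad_cache, rmsprop_cache = 0.0, 0.0
decay_rate = 0.99
for t in range(1000):
    adagrad_cache += dx**2                                                 # grows without bound
    rmsprop_cache = decay_rate*rmsprop_cache + (1-decay_rate)*dx**2        # stays bounded
print(adagrad_cache)   # 1000.0  -> the effective learning rate keeps shrinking
print(rmsprop_cache)   # ~1.0    -> the effective learning rate stabilizes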
Adam, initial (simplified) version: essentially RMSProp with momentum added.
m = beta1*m + (1-beta1)*dx          # first moment, momentum-like
v = beta2*v + (1-beta2)*(dx**2)     # second moment, RMSProp-like
x += -learning_rate*m / (np.sqrt(v)+1e-7)
The actual update rule used in practice is as follows:
m = beta1*m + (1-beta1)*dx
v = beta2*v + (1-beta2)*(dx**2)
mb = m/(1-beta1**t) # t is step number
vb = v/(1-beta2**t)
x += -learning_rate*mb / (np.sqrt(vb)+1e-7)
mb and vb provide bias correction, which acts like a warm-up during the first steps; once t is large, (1-beta1**t) ≈ 1 and the correction disappears. For example, at t = 1 with beta1 = 0.9, m is only 0.1*dx, but mb = m/(1-0.9) = dx.
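A quick check of the bias-correction factors themselves (beta1 = 0.9 and beta2 = 0.999 are assumed here as the usual defaults; the formulas above do not fix them):

beta1, beta2 = 0.9, 0.999
for t in [1, 10, 100, 1000, 10000]:
    print(t, 1 - beta1**t, 1 - beta2**t)
# at t = 1 the factors are 0.1 and 0.001, so mb and vb scale the raw moments up substantially;
# the beta1 factor is ~1 after a few dozen steps, the beta2 factor only after a few thousand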
Second-order methods rely on a second-order Taylor expansion:
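As a minimal sketch in standard notation: expanding the loss f around the current point x_0 to second order and minimizing the resulting quadratic gives the Newton update, which replaces the hand-tuned learning rate with the inverse Hessian.

f(x) \approx f(x_0) + \nabla f(x_0)^{\top} (x - x_0) + \frac{1}{2} (x - x_0)^{\top} H(x_0) (x - x_0)

x \leftarrow x_0 - H(x_0)^{-1} \nabla f(x_0)

In practice the Hessian is far too expensive to form and invert for large networks, which is why the first-order methods above are what gets used. In TensorFlow they are exposed as follows: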
optimizer = tf.train.GradientDescentOptimizer(learning_rate=self.learning_rate)
optimizer = tf.train.MomentumOptimizer(lr, 0.9)
optimizer = tf.train.AdagradOptimizer(learning_rate=self.learning_rate)
optimizer = tf.train.RMSPropOptimizer(0.001, 0.9)
optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate, epsilon=1e-08)
Some of the optimizer-specific parameters need to be looked up in the official TensorFlow documentation.
To optimize directly:
train_op = optimizer.minimize(loss)
Or fetch the gradients explicitly so they can be clipped or otherwise processed before being applied:
gradients, variables = zip(*optimizer.compute_gradients(loss))   # list of (gradient, variable) pairs
gradients, _ = tf.clip_by_global_norm(gradients, self.max_gradient_norm)
train_op = optimizer.apply_gradients(zip(gradients, variables), global_step=self.global_step)
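For context, here is a self-contained toy sketch (a hypothetical linear-regression example using the TF 1.x API; the variable names are mine) showing the compute_gradients -> clip_by_global_norm -> apply_gradients pattern end to end, including actually running train_op in a session:

import numpy as np
import tensorflow as tf   # TF 1.x API assumed

x_data = np.random.randn(100).astype(np.float32)
y_data = 3.0 * x_data + 2.0                        # ground truth: w = 3, b = 2

w = tf.Variable(0.0)
b = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x_data + b - y_data))

global_step = tf.Variable(0, trainable=False)
optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
gradients, variables = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 5.0)   # clip before applying
train_op = optimizer.apply_gradients(zip(gradients, variables), global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train_op)
    print(sess.run([w, b]))   # should end up close to [3.0, 2.0]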
In Caffe, the optimizer is configured by setting the corresponding parameters in solver.prototxt.
One gotcha is that the type string changes between versions (e.g. ADAM vs. Adam), so check the code of the version you are using.
* Stochastic Gradient Descent (type: "SGD"),
* AdaDelta (type: "AdaDelta"),
* Adaptive Gradient (type: "AdaGrad"),
* Adam (type: "Adam"),
* Nesterov's Accelerated Gradient (type: "Nesterov") and
* RMSprop (type: "RMSProp")
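# SGD example (no explicit type; Caffe defaults to SGD)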
base_lr: 0.01
lr_policy: "step" # 也可以使用指数,多项式等等
gamma: 0.1
stepsize: 1000
max_iter: 3500
momentum: 0.9
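
# AdaDelta example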
net: "examples/mnist/lenet_train_test.prototxt"
test_iter: 100
test_interval: 500
base_lr: 1.0
lr_policy: "fixed"
momentum: 0.95
weight_decay: 0.0005
display: 100
max_iter: 10000
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet_adadelta"
solver_mode: GPU
type: "AdaDelta"
delta: 1e-6
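
# AdaGrad example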
net: "examples/mnist/mnist_autoencoder.prototxt"
test_state: { stage: 'test-on-train' }
test_iter: 500
test_state: { stage: 'test-on-test' }
test_iter: 100
test_interval: 500
test_compute_loss: true
base_lr: 0.01
lr_policy: "fixed"
display: 100
max_iter: 65000
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "examples/mnist/mnist_autoencoder_adagrad_train"
# solver mode: CPU or GPU
solver_mode: GPU
type: "AdaGrad"
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
weight_decay: 0.0005
momentum: 0.95
type: "Nesterov"
train_net: "nin_train_val.prototxt"
base_lr: 0.001
###############
##### step:base_lr * gamma ^ (floor(iter / stepsize))
#lr_policy: "step"
#gamma: 0.1
#stepsize: 25000
##### multi-step:
#lr_policy: "multistep"
#gamma: 0.5
#stepvalue: 1000
#stepvalue: 2000
#stepvalue: 3000
#stepvalue: 4000
#stepvalue: 5000
#stepvalue: 10000
#stepvalue: 20000
###### inv:base_lr * (1 + gamma * iter) ^ (- power)
# lr_policy: "inv"
# gamma: 0.0001
# power: 2
##### exp:base_lr * gamma ^ iter
# lr_policy: "exp"
# gamma: 0.9
##### poly: base_lr * (1 - iter/max_iter) ^ power
# lr_policy: "poly"
# power: 0.9
##### sigmoid: base_lr * (1 / (1 + exp(-gamma * (iter - stepsize))))
# lr_policy: "sigmoid"
# gamma: 0.9
#momentum: 0.9
solver_type: ADAM
momentum: 0.9
momentum2: 0.999
delta: 1e-8
lr_policy: "fixed"
display: 100
max_iter: 50000
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "./stage1/sgd_DeepBit1024_alex_stage1"
solver_mode: GPU
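
# RMSProp example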
net: "examples/mnist/lenet_train_test.prototxt"
test_iter: 100
test_interval: 500
base_lr: 1.0
lr_policy: "fixed"
momentum: 0.95
weight_decay: 0.0005
display: 100
max_iter: 10000
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet_adadelta"
solver_mode: GPU
type: "RMSProp"
rms_decay: 0.98