Four ways to change the learning rate

Fixed

The learning rate stays constant throughout training.

base_lr = 0.01
lr_policy = "fixed"

Step

After every stepsize iterations, the learning rate is scaled by gamma: lr = lr × gamma, i.e. LR(t) = base_lr × gamma^(floor(t / stepsize)).

base_lr = 0.01
lr_policy = "step"
gamma = 0.1
stepsize = 10000
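
A short Python sketch of the resulting schedule (lr_step is a hypothetical helper; t is the 0-based iteration count):

def lr_step(t, base_lr=0.01, gamma=0.1, stepsize=10000):
    # Multiply base_lr by gamma once per completed stepsize block:
    # t in [0, 10000) -> 0.01, [10000, 20000) -> 0.001, and so on.
    return base_lr * gamma ** (t // stepsize)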

Polynomial

The learning rate decays along a polynomial curve, reaching zero at the final iteration max_iter: LR(t) = base_lr × (1 − t / max_iter)^power.

base_lr = 0.01
lr_policy = "poly"
power = 0.5
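
A sketch of the polynomial schedule in Python (lr_poly and the max_iter value of 100000 are assumptions for illustration; max_iter is the solver's total iteration budget):

def lr_poly(t, base_lr=0.01, power=0.5, max_iter=100000):
    # Decays from base_lr at t = 0 down to 0 at t = max_iter;
    # with power = 0.5 the curve follows a square root.
    return base_lr * (1 - t / max_iter) ** power

For example, halfway through training, lr_poly(50000) = 0.01 × sqrt(0.5) ≈ 0.0071.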

Inv

The learning rate decreases as the iteration count grows: LR(t) = base_lr × (1 + gamma × t)^(−power).

base_lr = 0.01
lr_policy = "Inv"
gamma = 0.0001
power = 0.75
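
A sketch of the inv schedule in Python (lr_inv is a hypothetical helper):

def lr_inv(t, base_lr=0.01, gamma=0.0001, power=0.75):
    # Smooth decay with no fixed endpoint; returns base_lr at t = 0.
    return base_lr * (1 + gamma * t) ** -power

For example, at t = 10000 the factor is (1 + 1)^(−0.75) ≈ 0.59, so the rate drops to about 0.0059.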
