• $\theta_t$: model parameters at time step $t$
• $\nabla L(\theta_t)$ or $g_t$: gradient at $\theta_t$, used to compute $\theta_{t+1}$
• $m_{t+1}$: momentum accumulated from time step 0 to time step $t$, which is used to compute $\theta_{t+1}$
Find a $\theta$ to get the lowest $\sum_x L(\theta; x)$
Or, find a $\theta$ to get the lowest $L(\theta)$
What if the gradients at the first few time steps are extremely large…
Exponential moving average (EMA) of squared gradients is not monotonically increasing
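A tiny NumPy sketch of this point, using a made-up gradient sequence and an assumed $\beta_2 = 0.999$: an Adagrad-style cumulative sum of squared gradients can only grow, while the EMA shrinks again once the gradients become small.

```python
import numpy as np

np.random.seed(0)
beta2 = 0.999                                   # assumed Adam-style decay rate

# Made-up 1-D gradient sequence: large gradients early, tiny gradients later.
grads = np.concatenate([np.random.randn(100) * 10.0,
                        np.random.randn(2000) * 0.01])

cum_sum = 0.0   # Adagrad-style accumulator: sum of squared gradients, never decreases
v = 0.0         # RMSProp/Adam-style EMA of squared gradients, can decrease again

for t, g in enumerate(grads, start=1):
    cum_sum += g ** 2
    v = beta2 * v + (1 - beta2) * g ** 2
    if t in (100, 500, 2100):
        print(f"step {t:4d}   cumulative sum = {cum_sum:10.2f}   EMA = {v:8.4f}")
```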
Adam vs SGDM
• Adam: fast training, large generalization gap, unstable
• SGDM: stable, little generalization gap, better convergence(?)
Begin with Adam (fast), end with SGDM
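A rough PyTorch sketch of this "switch from Adam to SGDM" recipe; the toy model, the switch point `SWITCH_STEP`, and both learning rates are assumptions for illustration, not values from the slides.

```python
import torch
import torch.nn as nn

# Toy model and data; every hyperparameter here is an illustrative assumption.
model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
x, y = torch.randn(256, 10), torch.randn(256, 1)

SWITCH_STEP = 500                     # assumed point to hand over to SGDM
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    if step == SWITCH_STEP:
        # Hand the same parameters to SGD with momentum for the final phase.
        opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```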
Troubleshooting:
The “memory” of $v_t$ covers roughly the last 1000 steps! (With the usual $\beta_2 = 0.999$, the EMA's effective window is about $1/(1-\beta_2) = 1000$ steps.)
In the final stage of training, most gradients are small and non-informative, while only a few rare mini-batches provide large, informative gradients.
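A small NumPy sketch of this situation (all gradient magnitudes and hyperparameters are made up): the one informative gradient moves the parameter by at most about $\sqrt{1/(1-\beta_2)}\,\eta \approx 32\eta$, each tiny non-informative gradient moves it by about $\eta$ once $v_t$ has equilibrated, and the informative gradient's trace in $v_t$ fades with a time constant of roughly $1/(1-\beta_2) = 1000$ steps.

```python
import numpy as np

beta2, eta, eps = 0.999, 1e-3, 1e-8   # assumed Adam-like hyperparameters

v = 0.0
for t in range(1, 4001):
    # One rare, large, informative gradient among many tiny, noisy ones (made-up values).
    g = 1.0 if t == 2000 else 1e-5
    v = beta2 * v + (1 - beta2) * g ** 2
    update = eta * abs(g) / (np.sqrt(v) + eps)   # |Adam-style step|, bias correction omitted
    # Before the spike each tiny gradient moves the parameter by about eta;
    # the spike itself moves it by at most about sqrt(1/(1-beta2)) * eta ~ 32*eta,
    # and its trace in v decays with a time constant of roughly 1000 steps.
    if t in (1999, 2000, 2001, 3000, 4000):
        print(f"t={t:4d}   v={v:.2e}   |update|={update:.2e}")
```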
Adaptive learning rate algorithms: dynamically adjust the learning rate over time
SGD-type algorithms: fix the learning rate for all updates… too slow with small learning rates, poor results with large learning rates
Cyclical LR [Smith, WACV’17]
• learning rate bounds: decided by an LR range test
• step size: several epochs
• avoid local minima by varying the learning rate
• SGDR [Loshchilov et al., ICLR’17]
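A brief sketch of both schedules with PyTorch's built-in schedulers; the learning-rate bounds, step size, and restart period below are illustrative assumptions, and the LR range test itself is not shown.

```python
import torch

model = torch.nn.Linear(10, 1)          # toy model; all hyperparameters below are illustrative
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Cyclical LR: the learning rate oscillates between base_lr and max_lr.
# The bounds would normally come from an LR range test and the step size
# spans a few epochs; the numbers here are assumptions.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=1e-4, max_lr=1e-1, step_size_up=2000)

# SGDR alternative: cosine annealing with warm restarts every T_0 epochs.
# scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10, T_mult=2)

for step in range(10):                   # inside the training loop:
    opt.step()                           # (forward/backward omitted in this sketch)
    scheduler.step()                     # CyclicLR is stepped after every batch
```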
Adam needs warm-up
Experiments show that the gradient distribution is distorted in the first 10 steps
Keeping the step size small at the beginning of training helps reduce the variance of the gradients
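A minimal warm-up sketch using PyTorch's `LambdaLR` with a linear ramp; the base learning rate and the 1000-step warm-up length are assumptions, not values from the slides.

```python
import torch

model = torch.nn.Linear(10, 1)                       # toy model for illustration
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed base learning rate

WARMUP_STEPS = 1000                                  # assumed warm-up length

# Scale the LR linearly from ~0 up to the base LR over the first WARMUP_STEPS,
# then keep it constant, so the earliest (noisiest) updates stay small.
warmup = torch.optim.lr_scheduler.LambdaLR(
    opt, lr_lambda=lambda step: min(1.0, (step + 1) / WARMUP_STEPS))

for step in range(2000):      # training loop sketch (forward/backward omitted)
    opt.step()
    warmup.step()
```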
RAdam [Liu et al., ICLR’20]
1. $\rho_t = \rho_\infty - \frac{2t\beta_2^t}{1-\beta_2^t}$: effective memory size of the EMA
2. $\rho_\infty = \frac{2}{1-\beta_2} - 1$: max memory size ($t \to \infty$)
3. $r_t = \sqrt{\frac{(\rho_t-4)(\rho_t-2)\,\rho_\infty}{(\rho_\infty-4)(\rho_\infty-2)\,\rho_t}}$: rectification term, used when $\rho_t > 4$
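A short sketch of these three quantities as defined in the RAdam paper, with $\beta_2 = 0.999$ as an assumed default:

```python
import math

def radam_terms(t, beta2=0.999):
    """Per-step quantities from the RAdam paper (beta2 = 0.999 is an assumption)."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0                          # max memory size (t -> infinity)
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)  # effective memory size of the EMA
    if rho_t > 4.0:                                              # variance is tractable: rectify
        r_t = math.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf) /
                        ((rho_inf - 4) * (rho_inf - 2) * rho_t))
    else:                                                        # early steps: SGDM-style (un-adapted) update
        r_t = None
    return rho_inf, rho_t, r_t

for t in (1, 4, 10, 100, 1000, 10000):
    print(t, radam_terms(t))
```

Recent versions of PyTorch also ship a built-in `torch.optim.RAdam` optimizer.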
Nesterov accelerated gradient (NAG) [Nesterov, Dokl. Akad. Nauk SSSR’83]
SGDM:
$\theta_t = \theta_{t-1} - m_t$
$m_t = \lambda m_{t-1} + \eta \nabla L(\theta_{t-1})$
Look into the future:
$\theta_t = \theta_{t-1} - m_t$
$m_t = \lambda m_{t-1} + \eta \nabla L(\theta_{t-1} - \lambda m_{t-1})$
Nesterov accelerated gradient (NAG):
$\theta_t = \theta_{t-1} - m_t$
$m_t = \lambda m_{t-1} + \eta \nabla L(\theta_{t-1} - \lambda m_{t-1})$
Define $\theta'_t = \theta_t - \lambda m_t$, then
$\theta'_t = \theta_t - \lambda m_t$
$\;\;\;= \theta_{t-1} - m_t - \lambda m_t$
$\;\;\;= \theta_{t-1} - \lambda m_t - \lambda m_{t-1} - \eta \nabla L(\theta_{t-1} - \lambda m_{t-1})$
$\;\;\;= \theta'_{t-1} - \lambda m_t - \eta \nabla L(\theta'_{t-1})$
$m_t = \lambda m_{t-1} + \eta \nabla L(\theta'_{t-1})$
SGDM:
$\theta_t = \theta_{t-1} - m_t$
$m_t = \lambda m_{t-1} + \eta \nabla L(\theta_{t-1})$
or
$\theta_t = \theta_{t-1} - \lambda m_{t-1} - \eta \nabla L(\theta_{t-1})$
$m_t = \lambda m_{t-1} + \eta \nabla L(\theta_{t-1})$
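In the reformulated NAG above, the gradient is evaluated only at the maintained iterate $\theta'$, so it has the same per-step cost as SGDM. A small pure-Python sketch comparing the two rules on a toy quadratic (the loss, $\lambda = 0.9$, and $\eta = 0.1$ are assumptions for illustration):

```python
def grad(theta):
    """Gradient of the toy loss L(theta) = (theta - 3)^2."""
    return 2.0 * (theta - 3.0)

lam, eta, T = 0.9, 0.1, 200    # assumed momentum factor, learning rate, number of steps

# SGDM:  m_t = lam*m_{t-1} + eta*grad(theta_{t-1});  theta_t = theta_{t-1} - m_t
theta, m = 0.0, 0.0
for _ in range(T):
    m = lam * m + eta * grad(theta)
    theta -= m

# NAG rewritten on theta' = theta - lam*m:
#   m_t      = lam*m_{t-1} + eta*grad(theta'_{t-1})
#   theta'_t = theta'_{t-1} - lam*m_t - eta*grad(theta'_{t-1})
theta_p, m = 0.0, 0.0
for _ in range(T):
    g = grad(theta_p)          # gradient only at the maintained iterate, no lookahead pass
    m = lam * m + eta * g
    theta_p = theta_p - lam * m - eta * g

print(f"SGDM: {theta:.4f}   NAG: {theta_p:.4f}")   # both approach the minimum at 3.0
```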
Nadam [Dozat, ICLR workshop’16]
Normalization