Optimizer: a personal summary


optimizer = SGD + learning rate scheduler (a minimal sketch of this pairing follows the reading list below)
机器之心: Adagrad & the evolution of optimizers
Paper: An overview of gradient descent optimization algorithms
简书: Adam
知乎: common performance & quality tricks for text classification
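
As a concrete illustration of the "SGD + learning rate scheduler" framing above, here is a minimal PyTorch sketch; the linear model, random data, and StepLR schedule are arbitrary placeholders, not taken from any of the linked articles.

```python
import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR

# Toy model and data, only to make the loop runnable.
model = nn.Linear(10, 1)
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = StepLR(optimizer, step_size=10, gamma=0.5)  # halve the LR every 10 epochs

for epoch in range(30):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()    # SGD applies the update with the current learning rate
    scheduler.step()    # the scheduler then adjusts the learning rate
```

Swapping SGD for Adam/AdamW, or StepLR for a warmup-and-decay schedule, changes only these two objects; the training loop itself stays the same.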

References:
1. In text classification tasks, which tricks that are rarely mentioned in papers have a significant impact on performance? https://www.zhihu.com/question/265357659
2. Regularization:
Zhihu on L2 regularization: https://www.zhihu.com/question/30231749
L1 and L2 regularization: https://zhuanlan.zhihu.com/p/35893078
Definitions of the L0, L1, and L2 norms: https://zhuanlan.zhihu.com/p/28023308
3. fast.ai course on attention and Transformers:
https://www.fast.ai/2019/07/08/fastai-nlp/
4. The Adam family (a minimal weight-decay sketch follows this list):
https://blog.csdn.net/u012535605/article/details/83579214
AdamW: https://zhuanlan.zhihu.com/p/40814046
AdamW: https://zhuanlan.zhihu.com/p/63982470
AdamW & AMSGrad: https://www.fast.ai/2018/07/02/adam-weight-decay/
Layer-wise optimizer implementation for large-batch BERT training (NVIDIA): https://medium.com/nvidia-ai/a-guide-to-optimizer-implementation-for-bert-at-scale-8338cc7f45fd
BERT AdamW source code: https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/optimization.py#L87
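
The practical point of the AdamW references above is where weight decay enters the update: classic "Adam + L2" folds the decay term into the gradient, so it gets rescaled by the adaptive moments, while AdamW applies it directly to the parameter. The sketch below contrasts the two in plain PyTorch; it omits bias correction (as the linked BERT optimizer does), and the function name, hyperparameter values, and toy usage are illustrative, not taken from the linked code.

```python
import torch

def adam_update(param, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999,
                eps=1e-6, weight_decay=0.01, decoupled=True):
    """One simplified Adam/AdamW step without bias correction (a sketch).

    decoupled=False: L2-regularization style; weight decay is added to the
                     gradient and therefore flows into the moment estimates.
    decoupled=True:  AdamW style; weight decay is applied directly to the
                     parameter and never touches the moment estimates.
    """
    if not decoupled:
        grad = grad + weight_decay * param               # L2 term enters m and v

    m.mul_(beta1).add_(grad, alpha=1 - beta1)            # first moment estimate
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment estimate
    update = m / (v.sqrt() + eps)

    if decoupled:
        update = update + weight_decay * param           # decoupled weight decay

    param.sub_(lr * update)
    return param, m, v

# Toy usage: one step on a dummy parameter, with moment buffers starting at zero.
p, g = torch.ones(3), torch.randn(3)
m, v = torch.zeros_like(p), torch.zeros_like(p)
p, m, v = adam_update(p, g, m, v, decoupled=True)
```

In practice torch.optim.AdamW already implements the decoupled variant (with bias correction); the sketch is only meant to make the difference between the two update rules explicit.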
