Deep Learning Optimization Algorithms Series, Part 6: Adam

1. What is Adam?

After all the groundwork in the previous posts, we have finally arrived at Adam. It is one of the most widely used optimizers, and most readers will have heard the name, but what Adam actually means may be less obvious. Adam is really two parts: Ada + M, where Ada is the Adaptive idea we introduced earlier, and M is the Momentum we have been discussing all along.

Recalling the earlier posts, the first moment (momentum) in SGD with momentum is computed as
$$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$$
while in AdaDelta the second moment is
$$V_t = \beta_2 V_{t-1} + (1-\beta_2) g_t^2$$
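
Adam simply maintains both of these exponential moving averages at the same time. To complete the picture, the update rule from the original Adam paper first bias-corrects the two moments (they are initialized at zero, which biases them toward zero in the early steps) and then takes the step:

$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{V}_t = \frac{V_t}{1-\beta_2^t}$$

$$\theta_t = \theta_{t-1} - lr \cdot \frac{\hat{m}_t}{\sqrt{\hat{V}_t} + \epsilon}$$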

2. Reading the source code

```python
class Adam(Optimizer):
  """Adam optimizer.

  Default parameters follow those provided in the original paper.

  Arguments:
      lr: float >= 0. Learning rate.
      beta_1: float, 0 < beta < 1. Generally close to 1.
      beta_2: float, 0 < beta < 1. Generally close to 1.
      epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
      decay: float >= 0. Learning rate decay over each update.
      amsgrad: boolean. Whether to apply the AMSGrad variant of this algorithm
        from the paper "On the Convergence of Adam and Beyond".
  """

  def __init__(self,
               lr=0.001,
               beta_1=0.9,
               beta_2=0.999,
               epsilon=None,
               decay=0.,
               amsgrad=False,
               **kwargs):
    super(Adam, self).__init__(**kwargs)
    with K.name_scope(self.__class__.__name__):
      self.iterations = K.variable(0, dtype='int64', name='iterations')
      self.lr = K.variable(lr, name='lr')
      self.beta_1 = K.variable(beta_1, name='beta_1')
      self.beta_2 = K.variable(beta_2, name='beta_2')
      self.decay = K.variable(decay, name='decay')
    if epsilon is None:
      epsilon = K.epsilon()
    self.epsilon = epsilon
    self.initial_decay = decay
    self.amsgrad = amsgrad
...
```

The above is the Adam source code from TensorFlow's Keras implementation. As you can see, the default values of $\beta_1$, $\beta_2$ and the other hyperparameters follow the original paper.
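
The excerpt only shows the constructor; the part that actually applies the update is omitted. To make the math concrete, here is a minimal NumPy sketch of a single Adam step using the same default hyperparameters. This is an illustration, not the Keras implementation; the function name adam_step and the explicit state passing are just for readability.

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    """One illustrative Adam update step (not the Keras code itself)."""
    m = beta_1 * m + (1 - beta_1) * g              # first moment m_t
    v = beta_2 * v + (1 - beta_2) * g ** 2         # second moment V_t
    m_hat = m / (1 - beta_1 ** t)                  # bias-corrected m_t
    v_hat = v / (1 - beta_2 ** t)                  # bias-corrected V_t
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + epsilon)  # parameter update
    return theta, m, v

# Toy usage: minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta = np.array(5.0)
m = v = np.zeros_like(theta)
for t in range(1, 10001):                          # t starts at 1 for the bias correction
    g = 2 * theta
    theta, m, v = adam_step(theta, g, m, v, t)
print(theta)                                       # should now sit near the minimum at 0
```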

$lr$: the learning rate, default 0.001
$\beta_1$: controls the first moment, default 0.9
$\beta_2$: controls the second moment, default 0.999
$\epsilon$: fuzz factor for numerical stability; defaults to K.epsilon(), i.e. 1e-7

With that, the $\beta_1$ and $\beta_2$ parameters we use day to day should finally be completely clear.
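
As a quick usage check, constructing the optimizer with exactly these defaults in the standalone Keras of that era looks like the following sketch; the toy model is purely illustrative, and note that newer tf.keras releases rename lr to learning_rate.

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# The defaults from the constructor above, written out explicitly.
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999,
                 epsilon=None, decay=0.0, amsgrad=False)

# Hypothetical toy model, only to show where the optimizer plugs in.
model = Sequential([Dense(10, activation='relu', input_shape=(4,)),
                    Dense(1)])
model.compile(optimizer=optimizer, loss='mse')
```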
