Keras Implementation of RAdam

Introduction

Rectified Adam (RAdam) is a recently proposed adaptive stochastic optimizer that rectifies the variance of the adaptive learning rate. It outperforms vanilla Adam and is more stable than Adam with learning-rate warmup. Original paper: https://arxiv.org/abs/1908.03265
This post records a Keras implementation of RAdam.

Keras Implementation

The implementation subclasses the built-in Keras Adam class, so the constructor and configuration are inherited and only get_updates is overridden.
file: radam.py

#coding=utf8
"""Recifited Adam optimizer
# Author : forin-xyz
# Created Time : Aug 24 22:02:55 2019
# Description:
"""

from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
from __future__ import unicode_literals


from keras import backend as K
from keras.optimizers import Adam
from keras.legacy import interfaces


class RAdam(Adam):
    """RAdam optimizer, also named Recifited Adam optimizer.
    Arguments
    ---------
        lr: float >= 0. Learning rate, default 0.001.
        beta_1: float, (0, 1). Generally close to 1.
        beta_2: float, (0, 1). Generally close to 1.
        epsilon: float >= 0. Fuzz factor, a negligible value (
            e.g. 1e-8), defaults to `K.epsilon()`.
        decay: float >= 0. Learning rate decay over each update.
    References
    ----------
      - [On the Variance of the Adaptive Learning Rate and Beyond](
         https://arxiv.org/abs/1908.03265)
    """

    @interfaces.legacy_get_updates_support
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]

        lr = self.lr
        if self.initial_decay:
            lr = lr * (1. / (1. + self.decay * K.cast(
                self.iterations, K.dtype(self.decay)
            )))

        t = K.cast(self.iterations, K.floatx()) + 1.
        beta_1 = self.beta_1
        beta_2 = self.beta_2
        beta_1_t = K.pow(beta_1, t)
        beta_2_t = K.pow(beta_2, t)
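        # rho_inf is the maximum length of the approximated simple
        # moving average (SMA); rho_t is its length at step t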
        rho_inf = 2. / (1. - beta_2) - 1.
        rho_t = rho_inf - 2. * t * beta_2_t / (1. - beta_2_t)
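        # variance rectification term r_t; K.relu clamps the numerator
        # to zero when rho_t <= 4, where the rectified update is not
        # used anyway (flag selects the branch in the update below)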
        r_t = K.sqrt(
            K.relu(rho_t - 4.) * (rho_t - 2.) * rho_inf / (
                K.relu(rho_inf - 4.) * (rho_inf - 2.) * rho_t )
        )
        flag = K.cast(rho_t > 4., K.floatx())

        ms = [K.zeros(K.int_shape(p)) for p in params]
        vs = [K.zeros(K.int_shape(p)) for p in params]

        self.weights = [self.iterations] + ms + vs
        for p, g, m, v in zip(params, grads, ms, vs):
            m_t = beta_1 * m + (1. - beta_1) * g
            v_t = beta_2 * v + (1. - beta_2) * K.square(g)

            m_hat_t = m_t / (1. - beta_1_t)
            v_hat_t = K.sqrt(v_t / (1. - beta_2_t))
            # rho_t > 4 (flag == 1): rectified adaptive update;
            # otherwise fall back to SGD with momentum on the
            # bias-corrected first moment, as in the paper
            new_p = p - lr * (
                flag * r_t / (v_hat_t + self.epsilon) + (1. - flag)
            ) * m_hat_t

            if getattr(p, "constraint", None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, new_p))
            self.updates.append(K.update(m, m_t))
            self.updates.append(K.update(v, v_t))
        return self.updates


del division
del print_function
del absolute_import
del unicode_literals

Test Script

file: test_radam.py

#coding=utf8
"""
# Author : forin-xyz
# Created Time : Aug 24 22:44:16 2019
# Description:
"""

from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
from __future__ import unicode_literals


import numpy as np
from radam import RAdam
from keras.models import Sequential
from keras import layers as L
import keras.backend as K
import math

def gelu(x):
    return 0.5 * x * (1 + K.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * K.pow(x, 3))))


# synthetic binary classification task: the label is 1 when the squared
# norm of a 25-dimensional standard-normal sample exceeds 25 (its
# expected value), so the two classes are roughly balanced
X = np.random.standard_normal((64 * 1000, 25))
y = np.int64(np.sum(X * X, axis=1) > 25.)

model = Sequential()
model.add(L.Dense(40, input_shape=(25,), activation=gelu))
model.add(L.Dense(64, activation=gelu))
model.add(L.Dense(32, activation=gelu))
model.add(L.Dropout(0.2))
model.add(L.Dense(1, activation="sigmoid"))

model.compile(RAdam(1e-4), loss="binary_crossentropy", metrics=["acc"])
model.fit(X, y, epochs=50, validation_split=0.05)


del division
del print_function
del absolute_import
del unicode_literals
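
A side note on saving and loading: because both RAdam and gelu are custom objects, they have to be passed back through custom_objects when a saved model is reloaded. A minimal sketch (the file name is arbitrary):

from keras.models import load_model

model.save("radam_model.h5")
restored = load_model(
    "radam_model.h5",
    custom_objects={"RAdam": RAdam, "gelu": gelu},
)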

GitHub repository: https://github.com/forin-xyz/keras_radam

Algorithm Overview

(Figure 1: the RAdam algorithm.)
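
As a textual substitute for the figure, the update rule implemented by the code above can be summarized as follows (my own paraphrase of the paper's Algorithm 2, using the paper's notation):

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 \\
\hat m_t &= m_t / (1-\beta_1^t), \qquad \hat v_t = v_t / (1-\beta_2^t) \\
\rho_\infty &= \frac{2}{1-\beta_2} - 1, \qquad
\rho_t = \rho_\infty - \frac{2\, t\, \beta_2^{\,t}}{1-\beta_2^{\,t}}
\end{aligned}
$$

$$
\theta_t =
\begin{cases}
\theta_{t-1} - \alpha_t\, r_t\, \dfrac{\hat m_t}{\sqrt{\hat v_t} + \epsilon},
\quad r_t = \sqrt{\dfrac{(\rho_t - 4)(\rho_t - 2)\,\rho_\infty}{(\rho_\infty - 4)(\rho_\infty - 2)\,\rho_t}},
& \text{if } \rho_t > 4 \\[2ex]
\theta_{t-1} - \alpha_t\, \hat m_t, & \text{otherwise}
\end{cases}
$$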

Advantages of RAdam

Compared with Adam

  1. Better performance: RAdam beats Adam on some datasets and matches it on others. Vanilla Adam can fall into a poor local optimum early in training, when only a few mini-batches have been seen and the variance of the adaptive learning rate is still very large; RAdam avoids this trap by using SGD-with-momentum updates during those early steps.
  2. Better stability: compared with Adam, RAdam converges to similar results over a much wider range of learning_rate values.

Compared with Adam with warmup

  1. Achieves comparable results with less hyperparameter tuning, since there is no warmup length or warmup schedule to hand-tune; see the sketch below.
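
For reference, a manual warmup setup in plain Keras might look like the sketch below. The warmup length and peak learning rate are illustrative values of my own choosing, and they are exactly the extra knobs that RAdam removes.

from keras.callbacks import LearningRateScheduler

WARMUP_EPOCHS = 5   # extra hyperparameter that has to be tuned
PEAK_LR = 1e-3      # extra hyperparameter that has to be tuned

def warmup_schedule(epoch):
    # ramp the learning rate up linearly over the first epochs,
    # then hold it at the peak value
    if epoch < WARMUP_EPOCHS:
        return PEAK_LR * (epoch + 1) / WARMUP_EPOCHS
    return PEAK_LR

# model.fit(X, y, epochs=50, callbacks=[LearningRateScheduler(warmup_schedule)])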
