tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)
Applies exponential decay to the learning rate.
When training a model, it is often recommended to lower the learning rate as training progresses. This function applies an exponential decay function to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate; you can simply pass a TensorFlow variable that you increment at each training step.
The function returns the decayed learning rate. It is computed as:
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
If the argument staircase is True, then global_step / decay_steps is an integer division and the decayed learning rate follows a staircase function.
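For intuition, the staircase effect can be reproduced with a short pure-Python sketch (the exponential_decay function below is a hypothetical stand-in written only to illustrate the formula, not the TensorFlow op):

def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False):
    # Hypothetical pure-Python version of the decay formula above.
    p = float(global_step) / decay_steps
    if staircase:
        p = global_step // decay_steps  # integer division -> step function
    return learning_rate * decay_rate ** p

print(exponential_decay(0.1, 50000, 100000, 0.96))                   # ~0.098 (continuous)
print(exponential_decay(0.1, 50000, 100000, 0.96, staircase=True))   # 0.1 (flat until step 100000)
print(exponential_decay(0.1, 150000, 100000, 0.96, staircase=True))  # 0.096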
Example: decay every 100000 steps with a base of 0.96:
...
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.1
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           100000, 0.96, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# Passing global_step to minimize() will increment it at each step.
optimizer.minimize(...my loss..., global_step=global_step)
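To actually drive the decay, run the training op in a loop; a minimal sketch assuming the TF 1.x-style session API (loss stands in for the elided ...my loss... tensor above, and the initializer name varies by release):

train_op = optimizer.minimize(loss, global_step=global_step)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # tf.initialize_all_variables() on older releases
    for _ in range(1000):
        sess.run(train_op)               # each run increments global_step by 1
    print(sess.run(learning_rate))       # the decayed rate at the current step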
Args:

learning_rate: A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
global_step: A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. Must not be negative.
decay_steps: A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above.
decay_rate: A scalar float32 or float64 Tensor or a Python number. The decay rate.
staircase: Boolean. If True, decay the learning rate at discrete intervals.
name: String. Optional name of the operation. Defaults to 'ExponentialDecay'.

Returns:

A scalar Tensor of the same type as learning_rate. The decayed learning rate.
Some training algorithms, such as GradientDescent and Momentum, often benefit from maintaining a moving average of variables during optimization. Using the moving averages for evaluation often improves results significantly.
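A minimal sketch of that pattern with tf.train.ExponentialMovingAverage (the variables var0 and var1 and the train_step op here are illustrative assumptions, not part of the example above):

ema = tf.train.ExponentialMovingAverage(decay=0.999)
maintain_averages_op = ema.apply([var0, var1])  # creates shadow copies of var0 and var1

# Run the average update together with each training step.
with tf.control_dependencies([train_step]):
    train_op = tf.group(maintain_averages_op)

# At evaluation time, read the smoothed value instead of the raw variable.
averaged_var0 = ema.average(var0)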