Gradient Clipping in DQN

First, see this Stack Overflow thread: https://stackoverflow.com/questions/36462962/loss-clipping-in-tensor-flow-on-deepminds-dqn

The clipping mentioned in the DQN paper is not gradient clipping.

First, look at `huber_loss` in TensorFlow 1, with d = 1:

  0.5 * x^2                  if |x| <= d
  0.5 * d^2 + d * (|x| - d)  if |x| > d
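For reference, here is a minimal NumPy sketch of this piecewise definition (the function name `huber` is my own; TF1's `tf.losses.huber_loss` implements the same formula, with `delta` as the threshold):

```python
import numpy as np

def huber(x, d=1.0):
    # quadratic inside [-d, d], linear outside
    return np.where(np.abs(x) <= d,
                    0.5 * x ** 2,
                    0.5 * d ** 2 + d * (np.abs(x) - d))
```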

Its derivative is

f'(x) = x    if x in [-1, 1]
f'(x) = +1    if x > +1
f'(x) = -1    if x < -1
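Note that for d = 1 this derivative is exactly `clip(x, -1, 1)`, which is what the paper's "error clipping" refers to. A quick numeric check (`huber_grad` is my own helper name):

```python
import numpy as np

def huber_grad(x, d=1.0):
    # derivative of the Huber loss; for d = 1 this is clip(x, -1, 1)
    return np.clip(x, -d, d)

print(huber_grad(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
# [-1.  -0.5  0.   0.5  1. ]
```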

$$loss = f(y_i - \hat{y_i}) \\ loss' = f'(y_i - \hat{y_i}) \cdot y_i' = f'(z) \cdot y_i'$$

The DQN paper states:

> We also found it helpful to clip the error term from the update to be between −1 and 1. Because the absolute value loss function |x| has a derivative of −1 for all negative values of x and a derivative of 1 for all positive values of x, clipping the squared error to be between −1 and 1 corresponds to using an absolute value loss function for errors outside of the (−1,1) interval. This form of error clipping further improved the stability of the algorithm.

But what the paper describes is keeping $f'(z)$ between -1 and 1. Gradient clipping, by contrast, clips the entire product $f'(z) \cdot y_i'$ to lie between -1 and 1.
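A quick numeric contrast makes the difference visible (pure NumPy; `z` and `y_grad` are illustrative values of my own, with the plain squared loss so that $f'(z) = z$):

```python
import numpy as np

z, y_grad = 3.0, 5.0  # illustrative: error term z = y_i - y_i_hat, and y_i'

# error clipping (what the DQN paper describes): clip f'(z), then multiply
error_clipped = np.clip(z, -1.0, 1.0) * y_grad      # -> 5.0

# gradient clipping: clip the whole product f'(z) * y_i'
gradient_clipped = np.clip(z * y_grad, -1.0, 1.0)   # -> 1.0

print(error_clipped, gradient_clipped)
```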

Bottom line: use the Huber loss function; the Mnih et al. 2015 paper called this "error clipping".
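For completeness, a TF1-style sketch of both options (the toy network, tensor names, and learning rate below are my own placeholders, not the paper's setup):

```python
import tensorflow as tf  # TensorFlow 1.x API

x = tf.placeholder(tf.float32, [None, 4])       # toy state input
target = tf.placeholder(tf.float32, [None, 1])  # TD target y_i
q_pred = tf.layers.dense(x, 1)                  # toy Q "network"

optimizer = tf.train.RMSPropOptimizer(2.5e-4)

# (a) error clipping as in the DQN paper: switch the loss to Huber (d = 1)
huber = tf.losses.huber_loss(labels=target, predictions=q_pred, delta=1.0)
train_op_huber = optimizer.minimize(huber)

# (b) actual gradient clipping: clip f'(z) * y_i' after backprop
mse = tf.losses.mean_squared_error(labels=target, predictions=q_pred)
grads_and_vars = optimizer.compute_gradients(mse)
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v)
           for g, v in grads_and_vars if g is not None]
train_op_clipped = optimizer.apply_gradients(clipped)
```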
