Reinforcement Learning Paper Notes

< HIGH-DIMENSIONAL CONTINUOUS CONTROL USING GENERALIZED ADVANTAGE ESTIMATION >

John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan and Pieter Abbeel
Department of Electrical Engineering and Computer Science
University of California, Berkeley
{joschu,pcmoritz,levine,jordan,pabbeel}@eecs.berkeley.edu

  • The core idea: use GAE to approximate the advantage function and thereby reduce variance, with parameters that control over how long a horizon an action is credited for subsequent rewards (a worked restatement follows after this list). This observation suggests an interpretation of Equation (16): reshape the rewards using $V$ to shrink the temporal extent of the response function, and then introduce a "steeper" discount $\gamma\lambda$ to cut off the noise arising from long delays, i.e., ignore terms $\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\delta^V_{t+l}$ where $l \gg 1/(1-\gamma\lambda)$.

  • GAE: generalized advantage estimation

  • Two main challenges
    • the large number of samples typically required
    • the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data
  • Solutions

    • We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates, at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(λ).
    • We address the second challenge by using a trust region optimization procedure for both the policy and the value function, which are represented by neural networks.

  1. A family of policy gradient estimators that substantially reduce variance while keeping bias at a tolerable level: the generalized advantage estimator (GAE), parameterized by $\gamma \in [0,1]$ and $\lambda \in [0,1]$.
  2. A more general analysis that applies in both the online and the batch setting, and a discussion of the interpretation of the method as an instance of reward shaping.
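
The following is a worked restatement of the reward-shaping reading above, as I reconstruct Equation (16) from the paper's discussion (assuming the shaping potential is $\Phi = V$):

```latex
\begin{align*}
  % Shaping the reward with potential \Phi = V turns each reward into the TD residual:
  \tilde{r}(s_t, a_t, s_{t+1})
    &= r(s_t, a_t, s_{t+1}) + \gamma V(s_{t+1}) - V(s_t) = \delta^V_t \\
  % Summing the shaped rewards with the "steeper" discount \gamma\lambda recovers GAE:
  \sum_{l=0}^{\infty} (\gamma\lambda)^l \, \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1})
    &= \sum_{l=0}^{\infty} (\gamma\lambda)^l \, \delta^V_{t+l}
     = \hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)}
\end{align*}
```

Terms with $l \gg 1/(1-\gamma\lambda)$ receive weight $(\gamma\lambda)^l \approx 0$, which is why they can be ignored.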

The paper's three contributions:

  • We provide justification and intuition for an effective variance reduction scheme for policy gradients, which we call generalized advantage estimation (GAE). While the formula has been proposed in prior work (Kimura & Kobayashi, 1998; Wawrzyński, 2009), our analysis is novel and enables GAE to be applied with a more general set of algorithms, including the batch trust-region algorithm we use for our experiments.
  • We propose the use of a trust region optimization method for the value function, which we find is a robust and efficient way to train neural network value functions with thousands of parameters (see the sketch of the subproblem after this list).
  • By combining (1) and (2) above, we obtain an algorithm that empirically is effective at learning neural network policies for challenging control tasks. The results extend the state of the art in using reinforcement learning for high-dimensional continuous control.
    Videos are available at
    https://sites.google.com/site/gaepapersupp.
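
As a concrete reference for the second contribution, here is the value-function trust-region subproblem as I recall it from the paper (a sketch, not a verbatim quote; $\hat{V}_n$ are the empirical value targets, $\sigma^2$ sets the trust-region scale, and $\epsilon$ its radius):

```latex
\begin{align*}
  \operatorname*{minimize}_{\phi} \quad
    & \sum_{n=1}^{N} \bigl\lVert V_\phi(s_n) - \hat{V}_n \bigr\rVert^2 \\
  \text{subject to} \quad
    & \frac{1}{N} \sum_{n=1}^{N}
      \frac{\bigl\lVert V_\phi(s_n) - V_{\phi_{\mathrm{old}}}(s_n) \bigr\rVert^2}{2\sigma^2} \le \epsilon,
  \qquad
  \sigma^2 = \frac{1}{N}\sum_{n=1}^{N} \bigl\lVert V_{\phi_{\mathrm{old}}}(s_n) - \hat{V}_n \bigr\rVert^2
\end{align*}
```

The constrained problem is solved approximately with the conjugate gradient method, analogously to the TRPO policy update.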



Key points:

  • The value function is updated after the policy update (advantages at each iteration are computed with the value function from the previous iteration).
  • In the policy gradient $g = \mathbb{E}\bigl[\sum_{t=0}^{\infty} \Psi_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\bigr]$, the choice $\Psi_t = A^\pi(s_t, a_t)$ yields almost the lowest possible variance, though in practice the advantage function is not known and must be estimated.
  • Introduce a parameter $\gamma$ that reduces variance by downweighting rewards corresponding to delayed effects, at the cost of introducing bias.

  • The batch policy gradient estimate (a code sketch of how this is used appears at the end of these notes):

    $\hat{g} = \frac{1}{N}\sum_{n=1}^{N}\sum_{t=0}^{\infty} \hat{A}_t^n \nabla_\theta \log \pi_\theta(a_t^n \mid s_t^n) \qquad (9)$

    where $n$ indexes the trajectories in a batch of $N$.

  • $V$ is an approximate value function. Define $\delta_t^V = r_t + \gamma V(s_{t+1}) - V(s_t)$, which can be viewed as an estimate of the advantage of the action $a_t$.

  • The $k$-step advantage estimator is a telescoping sum of these terms: $\hat{A}_t^{(k)} := \sum_{l=0}^{k-1} \gamma^l \delta_{t+l}^V = -V(s_t) + r_t + \gamma r_{t+1} + \dots + \gamma^{k-1} r_{t+k-1} + \gamma^k V(s_{t+k})$.
  • The generalized advantage estimator GAE($\gamma,\lambda$) is the exponentially-weighted average of the $k$-step estimators: $\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} := \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^V$. With $\lambda = 0$ this reduces to the one-step TD residual $\delta_t^V$ (low variance, but biased unless $V = V^{\pi,\gamma}$); with $\lambda = 1$ it becomes the discounted return minus the baseline $V(s_t)$ (unbiased regardless of $V$, but high variance).
  • Using GAE, a biased estimator of the discounted policy gradient $g^\gamma$ is obtained by rewriting Equation (6): $g^\gamma \approx \mathbb{E}\bigl[\sum_{t=0}^{\infty} \hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\bigr]$, with equality when $\lambda = 1$.
  • The full algorithm (a sketch of the GAE computation follows below):

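In place of the algorithm figure, here is a minimal sketch of the GAE computation (my own illustration; function and variable names such as `compute_gae` are hypothetical, not from the paper's code). The paper's Algorithm 1, as I read it, alternates between collecting trajectories, computing GAE advantages with the current value function, updating the policy with TRPO, and then refitting the value function:

```python
import numpy as np

def compute_gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation for a single trajectory.

    rewards: length-T sequence of rewards r_0 ... r_{T-1}
    values:  length-(T+1) sequence V(s_0) ... V(s_T); the final entry is a
             bootstrap value (use 0.0 if s_T is terminal)
    returns: length-T array with A_hat_t = sum_l (gamma*lam)^l * delta_{t+l}^V
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    T = len(rewards)
    # TD residuals: delta_t^V = r_t + gamma * V(s_{t+1}) - V(s_t)
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros(T)
    gae = 0.0
    # Backward recursion A_t = delta_t + gamma*lam*A_{t+1} equals the
    # (truncated) discounted sum of deltas that defines GAE.
    for t in reversed(range(T)):
        gae = deltas[t] + gamma * lam * gae
        advantages[t] = gae
    return advantages


if __name__ == "__main__":
    # Toy example: four steps of reward 1.0 and a rough value estimate.
    rewards = [1.0, 1.0, 1.0, 1.0]
    values = [0.5, 0.6, 0.7, 0.8, 0.0]   # V(s_0) ... V(s_4), s_4 terminal
    print(compute_gae(rewards, values, gamma=0.99, lam=0.95))
```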




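To connect this back to Equation (9): in an autodiff framework the batch gradient estimate is obtained as the gradient of a surrogate objective. A hypothetical PyTorch-style illustration (the paper itself uses a TRPO step for the policy rather than this plain gradient step):

```python
import torch

# Hypothetical tensors for a batch of N trajectories of length T:
#   log_probs  -- [N, T] values of log pi_theta(a_t^n | s_t^n), differentiable w.r.t. theta
#   advantages -- [N, T] GAE advantages from compute_gae, treated as constants
def policy_gradient_step(log_probs, advantages, optimizer):
    # Surrogate whose gradient w.r.t. theta equals g_hat in Eq. (9).
    surrogate = (advantages.detach() * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    (-surrogate).backward()  # minimize the negative <=> ascend the surrogate
    optimizer.step()
```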
