Reinforcement Learning: A Taxonomy with Key Papers (Part 1)

Reposted from: https://www.jianshu.com/p/aeb0fd6da40f

Reinforcement learning is currently a very active research direction. Categorizing the different reinforcement learning methods and their papers helps us understand which method is suitable for which application scenario. This article organizes reinforcement learning methods into categories and lists the corresponding papers for each one.

1. Model-Free RL

a. Deep Q-Learning series

Algorithm: DQN
Paper: Playing Atari with Deep Reinforcement Learning
Venue: NIPS Deep Learning Workshop, 2013
Link: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
Google Scholar citations (at time of writing): 5942


Algorithm: Deep Recurrent Q-Learning (DRQN)
Paper: Deep Recurrent Q-Learning for Partially Observable MDPs
Venue: AAAI Fall Symposium, 2015
Link: https://arxiv.org/abs/1507.06527
Google Scholar citations (at time of writing): 877


Algorithm: Dueling DQN
Paper: Dueling Network Architectures for Deep Reinforcement Learning
Venue: ICML, 2016
Link: https://arxiv.org/abs/1511.06581
Google Scholar citations (at time of writing): 1728


Algorithm: Double DQN
Paper: Deep Reinforcement Learning with Double Q-learning
Venue: AAAI, 2016
Link: https://arxiv.org/abs/1509.06461
Google Scholar citations (at time of writing): 3213


Algorithm: Prioritized Experience Replay (PER)
Paper: Prioritized Experience Replay
Venue: ICLR, 2016
Link: https://arxiv.org/abs/1511.05952
Google Scholar citations (at time of writing): 1914


Algorithm: Rainbow DQN
Paper: Rainbow: Combining Improvements in Deep Reinforcement Learning
Venue: AAAI, 2018
Link: https://arxiv.org/abs/1710.02298
Google Scholar citations (at time of writing): 903
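
Every paper in this family refines the same temporal-difference target that DQN introduced. Below is a minimal sketch of that target in PyTorch-style code; the function and variable names (q_net, target_net, batch) and the hyperparameter values are illustrative assumptions, not taken from any particular paper.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One TD-learning step in the spirit of DQN.

    `batch` is assumed to hold tensors (s, a, r, s_next, done) sampled
    from a replay buffer; all names, shapes and the discount factor are
    illustrative.
    """
    s, a, r, s_next, done = batch
    # Q(s, a) for the actions that were actually taken (a: LongTensor of shape [B]).
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Vanilla DQN bootstraps from max_a' Q_target(s', a').
        # Double DQN instead picks the argmax with q_net and evaluates it
        # with target_net to reduce over-estimation bias.
        next_q = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * next_q
    return F.smooth_l1_loss(q_sa, target)
```

Dueling DQN changes how Q is parameterized, PER changes how transitions are sampled from the replay buffer, and Rainbow combines these pieces (together with distributional and noisy-network components) into a single agent.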


b. Policy Gradients series

Algorithm: A3C
Paper: Asynchronous Methods for Deep Reinforcement Learning
Venue: ICML, 2016
Link: https://arxiv.org/abs/1602.01783
Google Scholar citations (at time of writing): 4739


Algorithm: TRPO
Paper: Trust Region Policy Optimization
Venue: ICML, 2015
Link: https://arxiv.org/abs/1502.05477
Google Scholar citations (at time of writing): 3357


Algorithm: GAE
Paper: High-Dimensional Continuous Control Using Generalized Advantage Estimation
Venue: ICLR, 2016
Link: https://arxiv.org/abs/1506.02438
Google Scholar citations (at time of writing): 1264


Algorithm: PPO-Clip, PPO-Penalty
Paper: Proximal Policy Optimization Algorithms
Venue: arXiv preprint
Link: https://arxiv.org/abs/1707.06347
Google Scholar citations (at time of writing): 4059


Algorithm: PPO-Penalty
Paper: Emergence of Locomotion Behaviours in Rich Environments
Venue: arXiv preprint
Link: https://arxiv.org/abs/1707.02286
Google Scholar citations (at time of writing): 528


Algorithm: ACKTR
Paper: Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation
Venue: NIPS, 2017
Link: https://arxiv.org/abs/1708.05144
Google Scholar citations (at time of writing): 408


Algorithm: ACER
Paper: Sample Efficient Actor-Critic with Experience Replay
Venue: ICLR, 2017
Link: https://arxiv.org/abs/1611.01224
Google Scholar citations (at time of writing): 486


Algorithm: SAC
Paper: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
Venue: ICML, 2018
Link: https://arxiv.org/abs/1801.01290
Google Scholar citations (at time of writing): 1447
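
PPO-Clip is probably the most widely used method in this group. As a minimal sketch (not the authors' reference implementation), its clipped surrogate objective looks as follows; per-sample log-probabilities and advantage estimates (e.g. from GAE) are assumed to be precomputed, and all names are illustrative.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective in the spirit of PPO-Clip."""
    ratio = torch.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # The surrogate is maximized, so return its negation as a loss.
    return -torch.min(unclipped, clipped).mean()
```

PPO-Penalty replaces the clipping with an adaptive KL penalty on the policy update; that is the variant used in the locomotion paper listed above.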


c. Deterministic Policy Gradients series

Algorithm: DPG
Paper: Deterministic Policy Gradient Algorithms
Venue: ICML, 2014
Link: http://proceedings.mlr.press/v32/silver14.pdf
Google Scholar citations (at time of writing): 1991


Algorithm: DDPG
Paper: Continuous Control With Deep Reinforcement Learning
Venue: ICLR, 2016
Link: https://arxiv.org/abs/1509.02971
Google Scholar citations (at time of writing): 5539


Algorithm: TD3
Paper: Addressing Function Approximation Error in Actor-Critic Methods
Venue: ICML, 2018
Link: https://arxiv.org/abs/1802.09477
Google Scholar citations (at time of writing): 839
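
The common thread in this family is that the actor outputs a deterministic action and is trained by backpropagating through the critic. A minimal sketch under that assumption is shown below; the module and argument names are illustrative, and the second function shows the clipped target-noise and twin-critic tricks that the TD3 paper introduces.

```python
import torch

def ddpg_actor_loss(actor, critic, states):
    """DPG/DDPG-style actor update: the actor outputs a deterministic
    action a = mu(s) and is trained by ascending Q(s, mu(s)); the
    gradient flows through the critic into the actor's parameters."""
    return -critic(states, actor(states)).mean()

def td3_target(critic1, critic2, target_actor, r, s_next, done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5):
    """TD3-style critic target: clipped noise on the target action plus
    a minimum over two critics to curb over-estimation. All module and
    hyperparameter names here are illustrative."""
    with torch.no_grad():
        a_target = target_actor(s_next)
        noise = (torch.randn_like(a_target) * noise_std).clamp(-noise_clip, noise_clip)
        a_next = (a_target + noise).clamp(-1.0, 1.0)   # assumes actions bounded in [-1, 1]
        q_next = torch.min(critic1(s_next, a_next), critic2(s_next, a_next))
        return r + gamma * (1.0 - done) * q_next
```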

d. Distributional RL series

Algorithm: C51
Paper: A Distributional Perspective on Reinforcement Learning
Venue: ICML, 2017
Link: https://arxiv.org/abs/1707.06887
Google Scholar citations (at time of writing): 600


Algorithm: QR-DQN
Paper: Distributional Reinforcement Learning with Quantile Regression
Venue: AAAI, 2018
Link: https://arxiv.org/abs/1710.10044
Google Scholar citations (at time of writing): 188


Algorithm: IQN
Paper: Implicit Quantile Networks for Distributional Reinforcement Learning
Venue: ICML, 2018
Link: https://arxiv.org/abs/1806.06923
Google Scholar citations (at time of writing): 139


Algorithm: Dopamine
Paper: Dopamine: A Research Framework for Deep Reinforcement Learning
Venue: ICLR, 2019
Link: https://openreview.net/forum?id=ByG_3s09KX
Google Scholar citations (at time of writing): 107
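
Distributional methods replace the scalar Q(s, a) with a full return distribution. A minimal sketch of the C51-style categorical parameterization is below; the support bounds and atom count are commonly used defaults but otherwise illustrative, and the full algorithm also needs the distributional Bellman projection, which is omitted here.

```python
import torch

def categorical_q_values(logits, v_min=-10.0, v_max=10.0, n_atoms=51):
    """C51-style categorical value distribution.

    The critic outputs, for each action, a distribution over n_atoms
    fixed support points; the ordinary Q-value is its expectation.
    Support bounds and atom count are illustrative defaults.
    """
    # logits: [batch, n_actions, n_atoms]
    support = torch.linspace(v_min, v_max, n_atoms)   # atoms z_1 .. z_N
    probs = torch.softmax(logits, dim=-1)             # p_i(s, a)
    q_values = (probs * support).sum(dim=-1)          # Q(s, a) = E[Z(s, a)]
    return q_values
```

The training step then applies a distributional Bellman update: C51 projects the shifted and scaled target distribution back onto the fixed support, while QR-DQN learns the quantile locations directly and IQN samples quantile fractions on the fly.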


e. Policy Gradients with Action-Dependent Baselines series

Algorithm: Q-Prop
Paper: Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
Venue: ICLR, 2017
Link: https://arxiv.org/abs/1611.02247
Google Scholar citations (at time of writing): 259


Algorithm: Stein Control Variates
Paper: Action-dependent Control Variates for Policy Optimization via Stein's Identity
Venue: ICLR, 2018
Link: https://arxiv.org/abs/1710.11198
Google Scholar citations (at time of writing): 46


Algorithm: N/A (analysis paper; no new algorithm)
Paper: The Mirage of Action-Dependent Baselines in Reinforcement Learning
Venue: ICML, 2018
Link: https://arxiv.org/abs/1802.10031
Google Scholar citations (at time of writing): 66
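
The question these papers study is how far the usual baseline trick can be pushed. A minimal sketch of the standard state-dependent baseline is shown below for contrast; the closing comment notes where the action-dependent variants differ. The "Mirage" paper argues empirically that much of the reported benefit of action-dependent baselines came from implementation details rather than from the baselines themselves.

```python
import torch

def pg_loss_with_state_baseline(logp, returns, baseline):
    """REINFORCE with a state-dependent baseline b(s).

    Subtracting b(s) leaves the gradient unbiased because
    E_{a~pi}[grad log pi(a|s)] = 0; all names are illustrative.
    """
    advantage = (returns - baseline).detach()   # no gradient flows through the baseline here
    return -(logp * advantage).mean()

# Q-Prop and the Stein control variates go further and use an
# action-dependent baseline b(s, a) (e.g. built from an off-policy
# critic); keeping the estimator unbiased then requires adding back an
# analytic correction term, which is what those papers derive.
```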


f. Path-Consistency Learning series

Algorithm: PCL
Paper: Bridging the Gap Between Value and Policy Based Reinforcement Learning
Venue: NIPS, 2017
Link: https://arxiv.org/abs/1702.08892
Google Scholar citations (at time of writing): 223


Algorithm: Trust-PCL
Paper: Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Venue: ICLR, 2018
Link: https://arxiv.org/abs/1707.01891
Google Scholar citations (at time of writing): 68
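
Both papers build on a single identity: under entropy regularization, the optimal value function and policy satisfy a consistency equation along any sub-trajectory, and learning proceeds by minimizing the squared violation of that equation. A minimal sketch of the residual, with illustrative tensor names and hyperparameters:

```python
import torch

def pcl_consistency_loss(values, log_pis, rewards, gamma=0.99, tau=0.01):
    """Squared path-consistency residual in the spirit of PCL.

    Along a length-d sub-trajectory, the entropy-regularized optimum satisfies
        V(s_0) = gamma^d * V(s_d) + sum_i gamma^i * (r_i - tau * log pi(a_i|s_i)).
    `values` holds d+1 state values, `log_pis` and `rewards` hold d entries;
    tau is the entropy temperature. All names are illustrative.
    """
    d = rewards.shape[0]
    discounts = gamma ** torch.arange(d, dtype=rewards.dtype)
    soft_path = (discounts * (rewards - tau * log_pis)).sum()
    residual = -values[0] + (gamma ** d) * values[-1] + soft_path
    return residual ** 2
```

Trust-PCL keeps the same consistency objective and adds a relative-entropy trust region around a prior policy, which stabilizes the off-policy updates.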


g. Other Directions for Combining Policy-Learning and Q-Learning

Algorithm: PGQL
Paper: Combining Policy Gradient and Q-learning
Venue: ICLR, 2017
Link: https://arxiv.org/abs/1611.01626
Google Scholar citations (at time of writing): 58


Algorithm: Reactor
Paper: The Reactor: A Fast and Sample-Efficient Actor-Critic Agent for Reinforcement Learning
Venue: ICLR, 2018
Link: https://arxiv.org/abs/1704.04651
Google Scholar citations (at time of writing): 42


Algorithm: IPG
Paper: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning
Venue: NIPS, 2017
Link: http://papers.nips.cc/paper/6974-interpolated-policy-gradient-merging-on-policy-and-off-policy-gradient-estimation-for-deep-reinforcement-learning
Google Scholar citations (at time of writing): 117


Algorithm: N/A (theoretical analysis; no new algorithm)
Paper: Equivalence Between Policy Gradients and Soft Q-Learning
Venue: arXiv preprint
Link: https://arxiv.org/abs/1704.06440
Google Scholar citations (at time of writing): 170
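
The bridge these papers exploit is that, with entropy regularization, the optimal policy is a Boltzmann distribution over Q-values, so a policy implicitly defines a Q-function and vice versa. A minimal sketch of that correspondence, with an illustrative temperature:

```python
import torch

def soft_policy_and_values(q_values, tau=0.1):
    """Boltzmann correspondence between Q-values and a stochastic policy.

    With entropy regularization, pi(a|s) = softmax(Q(s, a) / tau) and
    V(s) = tau * logsumexp(Q(s, .) / tau), so Q(s, a) can be recovered
    as tau * log pi(a|s) + V(s). tau is an illustrative temperature.
    """
    pi = torch.softmax(q_values / tau, dim=-1)
    v = tau * torch.logsumexp(q_values / tau, dim=-1, keepdim=True)
    q_reconstructed = tau * torch.log(pi) + v   # equals q_values
    return pi, v, q_reconstructed
```

Roughly speaking, PGQL uses this correspondence to add an off-policy Q-learning term to a policy-gradient update, while the "Equivalence" paper shows that soft Q-learning and entropy-regularized policy gradient updates coincide under this parameterization.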


h. Evolutionary Algorithms

Algorithm: ES
Paper: Evolution Strategies as a Scalable Alternative to Reinforcement Learning
Venue: arXiv preprint
Link: https://arxiv.org/abs/1703.03864
Google Scholar citations (at time of writing): 802
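
ES sidesteps backpropagation entirely: it perturbs the policy parameters with Gaussian noise, evaluates each perturbation's return, and moves along the return-weighted average of the noise. A minimal NumPy sketch is below; the hyperparameters are illustrative, and the paper's rank-based fitness shaping and antithetic sampling are replaced here by plain standardization for brevity.

```python
import numpy as np

def es_step(theta, fitness_fn, alpha=0.01, sigma=0.1, n_samples=50, rng=None):
    """One Evolution-Strategies-style parameter update.

    `theta` is a flat parameter vector and `fitness_fn` returns the
    (possibly non-differentiable) episode return for a parameter vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((n_samples, theta.size))             # noise directions
    returns = np.array([fitness_fn(theta + sigma * e) for e in eps])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize returns
    grad_estimate = eps.T @ returns / (n_samples * sigma)          # score-function estimate
    return theta + alpha * grad_estimate
```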

References
https://spinningup.openai.com/en/latest/
