Deep Reinforcement Learning from Self-Play in Imperfect-Information Games

Authors: Johannes Heinrich [email protected]
David Silver [email protected]
University College London, UK
Abstract:
Many real-world applications can be described as large-scale games of imperfect information. To deal with these challenging domains, prior work has focused on computing Nash equilibria in a handcrafted abstraction of the domain. In this paper we introduce the first scalable end-to-end approach to learning approximate Nash equilibria without any prior knowledge. Our method combines fictitious self-play with deep reinforcement learning. When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged. In Limit Texas Hold'em, a poker game of real-world scale, NFSP learnt a competitive strategy that approached the performance of human experts and state-of-the-art methods.
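The core idea the abstract describes is that each NFSP agent maintains two policies: a best response learnt by reinforcement learning, and an average policy learnt by supervised learning on the agent's own best-response actions, with an anticipatory parameter mixing the two. The following is a minimal tabular sketch of that dynamic on matching pennies, a single-state zero-sum game whose Nash equilibrium is uniform play; the constants (`ETA`, `ALPHA`, `EPS`) and the tabular simplification are illustrative assumptions, not the paper's neural-network setup.

```python
import random

ETA = 0.1        # anticipatory parameter: probability of playing the best response
ALPHA = 0.05     # step size for the RL (best-response) value update
EPS = 0.1        # exploration rate for the epsilon-greedy best response
ACTIONS = [0, 1] # heads / tails

def payoff(a0, a1):
    # Matching pennies: player 0 gets +1 if the actions match, else -1 (zero-sum).
    return 1.0 if a0 == a1 else -1.0

class Agent:
    """Toy NFSP-style agent: tabular stand-ins for the two networks."""
    def __init__(self):
        self.q = [0.0, 0.0]        # action values (plays the role of the Q-network)
        self.counts = [1.0, 1.0]   # action counts (plays the role of the average-policy network)

    def act(self):
        if random.random() < ETA:
            # Best-response branch: epsilon-greedy on the action values;
            # the chosen action also trains the average policy (here, a count).
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: self.q[x])
            self.counts[a] += 1
        else:
            # Otherwise sample from the average policy.
            total = sum(self.counts)
            a = 0 if random.random() < self.counts[0] / total else 1
        return a

    def learn(self, a, r):
        # Incremental value update toward the observed payoff.
        self.q[a] += ALPHA * (r - self.q[a])

    def avg_policy(self):
        total = sum(self.counts)
        return [c / total for c in self.counts]

random.seed(0)
p0, p1 = Agent(), Agent()
for _ in range(200_000):
    a0, a1 = p0.act(), p1.act()
    r = payoff(a0, a1)
    p0.learn(a0, r)
    p1.learn(a1, -r)

# The best responses keep chasing each other, but the *average* policies
# should settle near the uniform Nash equilibrium [0.5, 0.5].
print(p0.avg_policy(), p1.avg_policy())
```

The point of the sketch is the division of labour: the value table oscillates as each player exploits the other, while the slowly accumulating average policy is what approaches equilibrium, which mirrors why NFSP evaluates the average-policy network rather than the best response.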
