Introduction to Coordination in Multi-Agent Reinforcement Learning

We live in a world of constant interaction with others, involving both cooperation and competition. It is therefore natural to apply reinforcement learning to multi-agent systems.

[Figure: Multi-agent system]

Framework

Because of the limitations of the math formula editor here, I will give a picture showing the definition from the perspective of a Markov decision process.

[Figure: Multi-agent reinforcement learning]
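For readers who cannot see the picture, the setting is usually formalized as a Markov game (also called a stochastic game); a standard statement of that definition is:

```latex
% Markov game for n agents: states, per-agent actions, joint transition,
% per-agent rewards, and a discount factor.
\[
\mathcal{M} = \left(S,\; A_1,\dots,A_n,\; P,\; r_1,\dots,r_n,\; \gamma\right)
\]
\[
P : S \times A_1 \times \cdots \times A_n \to \Delta(S), \qquad
r_i : S \times A_1 \times \cdots \times A_n \to \mathbb{R}, \qquad
\gamma \in [0,1)
\]
% Each agent i seeks a policy maximizing its own discounted return,
% which depends on the actions of all agents:
\[
\pi_i^{*} = \arg\max_{\pi_i}\;
\mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\,
r_i\!\left(s_t,\, a_t^{1},\dots,a_t^{n}\right)\right]
\]
```

Note that both the transition kernel and each reward depend on the joint action, which is the source of most of the problems discussed below.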

Advantages

There are many advantages to having multiple agents act in a system.

  1. Efficient Exploration. There is a trade-off between exploration and exploitation in single-agent reinforcement learning. If multiple agents explore together and communicate with each other, sampling efficiency can be dramatically improved. For a recent research result, see [1].
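The idea can be shown with a toy sketch (this is only an illustration, not the algorithm of [1]): agents sweep different regions of a chain of states and pool their visits, so the team covers far more of the state space than one agent with the same per-agent budget.

```python
def explore(n_agents, steps_per_agent, n_states=100):
    """Each agent sweeps forward from its own start; visits are pooled."""
    visited = set()
    for i in range(n_agents):
        start = i * (n_states // n_agents)  # spread agents over the chain
        for t in range(steps_per_agent):
            visited.add((start + t) % n_states)
    return len(visited)

single = explore(1, 25)  # one agent covers 25 states
team = explore(4, 25)    # four agents pool their visits: 100 states
```

With the same 25 steps per agent, the lone agent sees 25 states while the team of four covers the full 100.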

  2. Robustness. It is not rare for machines to break down suddenly in the real world, causing a whole system to collapse. We therefore need spare machines to guard against such accidents, and multi-agent systems provide this redundancy naturally.

  3. Transfer and Lifelong Learning. By teaching and imitation, new agents can learn much faster than learning from scratch.

  4. Cooperation and Competition. Some tasks inherently require cooperation to accomplish, such as playing soccer or team combat games; through teamwork, agents can tackle complicated environments. In addition, when self-interests conflict, we need to think about how each agent can achieve its best reward. Interesting phenomena here include the Nash equilibrium.
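A pure-strategy Nash equilibrium can be found mechanically by checking that no player gains from a unilateral deviation. As a minimal sketch, here is the classic Prisoner's Dilemma (the payoff numbers are the textbook convention, not from this article):

```python
# Payoffs for the Prisoner's Dilemma: (row player, column player).
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(a1, a2):
    r1, r2 = payoff[(a1, a2)]
    # Neither player can gain by unilaterally switching actions.
    no_dev_1 = all(payoff[(b, a2)][0] <= r1 for b in actions)
    no_dev_2 = all(payoff[(a1, b)][1] <= r2 for b in actions)
    return no_dev_1 and no_dev_2

equilibria = [(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)]
# The only pure equilibrium is mutual defection ("D", "D"),
# even though ("C", "C") would pay both players more.
```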

Problems

We have discussed many advantages of multi-agent reinforcement learning. Now, what are its disadvantages and open problems?

  1. Huge State and Action Space. There is no doubt that the discrete state and action spaces grow exponentially with the number of agents, not to mention that state abstraction and representation become much tougher.
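The exponential growth is easy to make concrete: with |A| actions per agent and n agents, there are |A| to the power n joint actions.

```python
def joint_action_count(actions_per_agent, n_agents):
    """Size of the joint action space: |A| ** n."""
    return actions_per_agent ** n_agents

# With only 5 actions per agent, the joint space explodes quickly:
sizes = [joint_action_count(5, n) for n in (1, 2, 5, 10)]
# 5 agents already face 3125 joint actions; 10 agents face 9,765,625.
```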

  2. Partial Observability. Since the range a single agent can perceive is small relative to the whole system, each agent suffers from partial observation. Agents may need to communicate and then agree on the complete state information. Going further, how to design the communication channel among agents is itself a challenge. For recent research results, see [2] [3].
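As a toy sketch of why communication helps (this is hand-coded broadcasting, not the learned protocols of [2] [3]): each agent observes only its own slice of the global state, and one broadcast round lets every agent reconstruct the whole thing.

```python
global_state = [0, 1, 1, 0, 1, 0]

def local_obs(state, agent_id, window=2):
    """Each agent sees only its own window of the global state."""
    start = agent_id * window
    return state[start:start + window]

# Each of 3 agents perceives 2 of the 6 state variables.
observations = {i: local_obs(global_state, i) for i in range(3)}

# Broadcast round: every agent shares its slice over the channel,
# and concatenating the messages recovers the full state.
messages = [observations[i] for i in range(3)]
reconstructed = [bit for msg in messages for bit in msg]
# reconstructed now equals global_state
```

Designing *what* to send and *when* (rather than naively broadcasting everything) is exactly what the cited papers learn end-to-end.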

  3. Instability (Non-stationarity) in Learning. Because the transition model is determined by all agents jointly, the quality of the policy a single agent has learned is affected by the other agents' policies. When a single agent takes the same action in the same state, only to find a different next state and reward, it becomes confused and does not know how to learn: the environment appears non-stationary from its point of view. Under these conditions, the learning process may get stuck in oscillation.
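A minimal sketch of this non-stationarity: agent 1 repeats the exact same action, but its reward depends on agent 2, whose policy keeps changing because agent 2 is learning too.

```python
def reward(a1, a2):
    """Joint reward: agent 1 is paid only when the actions match."""
    return 1 if a1 == a2 else 0

agent1_action = "left"                              # agent 1 never changes
agent2_policy = ["left", "left", "right", "right"]  # agent 2 keeps adapting

rewards = [reward(agent1_action, a2) for a2 in agent2_policy]
# Same state, same action, yet the observed rewards drift: [1, 1, 0, 0]
```

From agent 1's perspective the environment has "changed", even though only the other agent's policy did.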

  4. Coordination and Cooperation. In the following picture, agents need to coordinate to avoid an obstacle while keeping formation. That means agent 1 needs to know which action agent 2 will choose in order to achieve the best payoff, and vice versa. It is impossible to complete such a task by choosing individual actions while ignoring the others' actions, and it becomes even more complicated when agents need to coordinate over a sequence of actions.

[Figure: Coordination]
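The obstacle scenario above reduces to a coordination game (the payoff numbers are my own illustrative choice): both agents must pass on the same side, and picking sides independently risks a costly mismatch.

```python
def payoff(a1, a2):
    """Both agents pass the obstacle on the same side: +1; mismatch: -1."""
    return 1 if a1 == a2 else -1

sides = ["left", "right"]

# Expected payoff when each agent picks a side independently and uniformly:
independent = sum(payoff(a, b) for a in sides for b in sides) / 4

# A coordinated pair always matches:
coordinated = payoff("left", "left")
# Independent choice averages 0.0, while coordination guarantees +1.
```

This is why each agent must reason about the other's choice: no individual action is good on its own, only the joint action matters.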

References

[1] M. Dimakopoulou and B. Van Roy. Coordinated Exploration in Concurrent Reinforcement Learning. ICML 2018.

[2] J. Foerster, Y. Assael, N. de Freitas and S. Whiteson. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. NIPS 2016.

[3] S. Sukhbaatar, A. Szlam and R. Fergus. Learning Multiagent Communication with Backpropagation. NIPS 2016.
