Option-Critic in Cooperative Multi-agent Systems
Abstract
In this paper, we investigate learning temporal abstractions in cooperative multi-agent systems using the options framework (Sutton et al., 1999). First, we address the planning problem for the decentralized POMDP represented by the multi-agent system by introducing a \emph{common information approach}: using the notion of \emph{common beliefs} and broadcasting, we solve an equivalent centralized POMDP problem. Then, we propose the Distributed Option Critic (DOC) algorithm, which combines centralized option evaluation with decentralized intra-option improvement. We theoretically analyze the asymptotic convergence of DOC and build a new multi-agent environment to validate it. Our experiments empirically show that DOC performs competitively against baselines and scales with the number of agents.
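The split the abstract describes, centralized option evaluation paired with decentralized intra-option improvement, can be sketched in tabular form. The names, shapes, and update rules below are illustrative assumptions for a toy fully observed setting, not the paper's exact notation: a shared critic evaluates option values over the common state, while each agent takes a local policy-gradient step on its own intra-option policy.

```python
import numpy as np

# Hypothetical toy sizes; the paper's setting (common beliefs over a
# Dec-POMDP) is richer than this fully observed tabular sketch.
n_agents, n_states, n_options, n_actions = 2, 4, 2, 3
gamma, lr = 0.9, 0.1

Q = np.zeros((n_states, n_options))  # centralized critic: option values
# Per-agent intra-option policy logits (decentralized actors).
logits = np.zeros((n_agents, n_options, n_states, n_actions))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def doc_step(s, o, joint_action, r, s_next, terminated):
    """One transition: centralized evaluation, decentralized improvement."""
    # Intra-option target: continue with option o unless it terminated.
    u = r + gamma * (Q[s_next].max() if terminated else Q[s_next, o])
    adv = u - Q[s, o]                 # advantage w.r.t. current estimate
    Q[s, o] += lr * adv               # centralized option evaluation
    for agent, a in enumerate(joint_action):
        pi = softmax(logits[agent, o, s])
        grad = -pi
        grad[a] += 1.0                # d log pi(a) / d logits
        logits[agent, o, s] += lr * adv * grad  # decentralized actor step

# Repeatedly rewarding joint action (0, 0) in state 0 under option 0
# raises the option's value and each agent's preference for action 0.
for _ in range(50):
    doc_step(s=0, o=0, joint_action=(0, 0), r=1.0, s_next=1, terminated=True)
```

The design point the sketch illustrates: only the critic `Q` depends on the shared (common) information, so each agent's improvement step touches only its own logits and can run without observing the other agents' parameters.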
- Publication:
- arXiv e-prints
- Pub Date:
- November 2019
- DOI:
- 10.48550/arXiv.1911.12825
- arXiv:
- arXiv:1911.12825
- Bibcode:
- 2019arXiv191112825C
- Keywords:
- Computer Science - Artificial Intelligence;
- Computer Science - Multiagent Systems;
- Electrical Engineering and Systems Science - Systems and Control;
- Mathematics - Optimization and Control