Modeling Sensorimotor Coordination as Multi-Agent Reinforcement Learning with Differentiable Communication
Abstract
Multi-agent reinforcement learning has shown promise on a variety of cooperative tasks as a consequence of recent developments in differentiable inter-agent communication. However, most architectures are restricted to pools of homogeneous agents, which limits their applicability. Here we propose a modular framework for learning complex tasks in which a traditional monolithic agent is framed as a collection of cooperating heterogeneous agents. We apply this approach to model sensorimotor coordination in the neocortex as a multi-agent reinforcement learning problem. Our results provide a proof of concept for the proposed architecture and open new avenues for learning complex tasks and for understanding functional localization in the brain and future intelligent systems.
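The core idea of the abstract, a monolithic learner decomposed into heterogeneous modules that exchange learned, differentiable messages, can be illustrated with a toy sketch. This is a minimal hand-derived example, not the paper's implementation; the "sensor" and "motor" modules and all parameter names are hypothetical. The point it demonstrates is that the task gradient flows backward through the continuous communication channel, so both modules are trained jointly by a single objective.

```python
# Toy sketch of differentiable inter-agent communication (hypothetical
# example, not the authors' architecture): a "sensor" module emits a
# continuous message to a "motor" module, and the loss gradient flows
# back through the message so both heterogeneous modules learn jointly.

def train(x, target, steps=100, lr=0.1):
    w_sensor, w_motor = 0.5, 0.5          # one parameter per module
    losses = []
    for _ in range(steps):
        m = w_sensor * x                  # sensor agent emits message m
        a = w_motor * m                   # motor agent maps m to action a
        loss = (a - target) ** 2          # shared task objective
        losses.append(loss)
        # Backpropagate through the communication channel (chain rule):
        dL_da = 2 * (a - target)
        dL_dwm = dL_da * m                # motor module's gradient
        dL_dm = dL_da * w_motor           # gradient entering the channel
        dL_dws = dL_dm * x                # sensor module's gradient
        w_motor -= lr * dL_dwm
        w_sensor -= lr * dL_dws
    return losses

losses = train(x=1.0, target=1.0)
```

In a full system the scalar message would be a learned vector and each module a neural network, but the gradient path through the channel is the same; this is what makes the communication "differentiable" rather than a discrete protocol.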
- Publication: arXiv e-prints
- Pub Date: September 2019
- DOI: 10.48550/arXiv.1909.05815
- arXiv: arXiv:1909.05815
- Bibcode: 2019arXiv190905815J
- Keywords: Computer Science - Multiagent Systems; Computer Science - Artificial Intelligence