On the Near-Optimality of Local Policies in Large Cooperative Multi-Agent Reinforcement Learning
Abstract
We show that in a cooperative $N$-agent network, one can design locally executable policies for the agents such that the resulting discounted sum of average rewards (value) well approximates the optimal value computed over all (including non-local) policies. Specifically, we prove that, if $\mathcal{X}, \mathcal{U}$ denote the sizes of the state and action spaces of individual agents, then, for a sufficiently small discount factor, the approximation error is given by $\mathcal{O}(e)$ where $e\triangleq \frac{1}{\sqrt{N}}\left[\sqrt{\mathcal{X}}+\sqrt{\mathcal{U}}\right]$. Moreover, in a special case where the reward and state transition functions are independent of the action distribution of the population, the error improves to $\mathcal{O}(e)$ where $e\triangleq \frac{1}{\sqrt{N}}\sqrt{\mathcal{X}}$. Finally, we also devise an algorithm to explicitly construct a local policy. With the help of our approximation results, we further establish that the constructed local policy is within $\mathcal{O}(\max\{e,\epsilon\})$ distance of the optimal policy, and the sample complexity to achieve such a local policy is $\mathcal{O}(\epsilon^{-3})$, for any $\epsilon>0$.
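As a quick numerical illustration of how the stated error terms decay with the population size $N$, the short Python sketch below evaluates both bounds for hypothetical state- and action-space sizes (the sizes and function names here are illustrative assumptions, not taken from the paper):

```python
# Illustrative sketch (not from the paper): evaluating the stated
# approximation-error terms e for hypothetical problem sizes.
import math

def general_error(num_agents: int, state_size: int, action_size: int) -> float:
    """General case: e = (sqrt(X) + sqrt(U)) / sqrt(N)."""
    return (math.sqrt(state_size) + math.sqrt(action_size)) / math.sqrt(num_agents)

def special_case_error(num_agents: int, state_size: int) -> float:
    """Special case (rewards/transitions independent of the population's
    action distribution): e = sqrt(X) / sqrt(N)."""
    return math.sqrt(state_size) / math.sqrt(num_agents)

# Hypothetical per-agent sizes: 10 states, 5 actions.
for n in (10, 100, 1000, 10000):
    print(f"N={n:>6}: general e={general_error(n, 10, 5):.4f}, "
          f"special e={special_case_error(n, 10):.4f}")
```

Both terms shrink as $1/\sqrt{N}$, so the gap between the best local policy and the globally optimal one vanishes as the network grows.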
 Publication:
 arXiv e-prints
 Pub Date:
 September 2022
 DOI:
 10.48550/arXiv.2209.03491
 arXiv:
 arXiv:2209.03491
 Bibcode:
 2022arXiv220903491U
 Keywords:
 Computer Science - Machine Learning;
 Computer Science - Multiagent Systems
 E-Print:
 Transactions on Machine Learning Research, 2022