Distributed Stochastic Approximation for Constrained and Unconstrained Optimization
Abstract
In this paper, we analyze the convergence of a distributed Robbins-Monro algorithm for both constrained and unconstrained optimization in multi-agent systems. The algorithm searches for local minima of a (nonconvex) objective function which is assumed to be a sum of local utility functions of the agents. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. It is proved that i) an agreement is achieved between agents on the value of the estimate, and ii) the algorithm converges to the set of Kuhn-Tucker points of the optimization problem. The proof relies on recent results about differential inclusions. In the context of unconstrained optimization, intelligible sufficient conditions are provided in order to ensure the stability of the algorithm. In the latter case, we also provide a central limit theorem which governs the asymptotic fluctuations of the estimate. We illustrate our results in the case of distributed power allocation for ad hoc wireless networks.
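The two-step structure described above (a local stochastic gradient step with Robbins-Monro step sizes, followed by a gossip-averaging step over a mixing matrix) can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the quadratic utilities, the noise level, the complete-graph mixing matrix `W`, and the step-size schedule `gamma0 / k` are all assumptions chosen for the example.

```python
import numpy as np

def distributed_sgd_gossip(grads, W, x0, steps=200, gamma0=1.0, seed=0):
    """Each agent keeps its own estimate (one row of X).
    Per iteration: local noisy gradient descent, then gossip step X <- W X."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    X = np.tile(np.asarray(x0, dtype=float), (n, 1))  # row i = agent i's estimate
    for k in range(1, steps + 1):
        gamma = gamma0 / k  # Robbins-Monro (decreasing) step size
        # local stochastic gradient descent: noisy gradient of each agent's utility
        G = np.array([g(X[i]) + 0.01 * rng.standard_normal(X[i].shape)
                      for i, g in enumerate(grads)])
        X = X - gamma * G
        # gossip step: doubly stochastic W drives the agents toward consensus
        X = W @ X
    return X

# Example: 3 agents minimize the sum of f_i(x) = (x - a_i)^2,
# whose global minimizer is the mean of the a_i.
a = [0.0, 1.0, 2.0]
grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]
W = np.full((3, 3), 1.0 / 3.0)  # complete-graph uniform averaging
X = distributed_sgd_gossip(grads, W, x0=[5.0])
```

After the run, all rows of `X` agree (consensus) and lie near the minimizer `x = 1.0` of the summed objective.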
 Publication:
 arXiv e-prints
 Pub Date:
 April 2011
 arXiv:
 arXiv:1104.2773
 Bibcode:
 2011arXiv1104.2773B
 Keywords:
 Computer Science - Information Theory
 E-Print:
 7 pages