Network Newton
Abstract
We consider minimization of a sum of convex objective functions where the components of the objective are available at different nodes of a network and nodes are allowed to communicate only with their neighbors. The use of distributed subgradient or gradient methods is widespread, but they often suffer from slow convergence because they rely on first-order information, which leads to a large number of local communications between nodes in the network. In this paper we propose the Network Newton (NN) method, a distributed algorithm that incorporates second-order information via distributed evaluation of approximations to Newton steps. We also introduce an adaptive variant (ANN) in order to establish exact convergence. Numerical analyses show significant improvement in both convergence time and number of communications for NN relative to existing first-order alternatives.
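To make the "approximate Newton step via neighbor communication" idea concrete, here is a minimal sketch on a toy problem, not the paper's full algorithm. It assumes the standard penalty reformulation F(y) = alpha * sum_i f_i(y_i) + (1/2) y^T (I - W) y with a doubly stochastic weight matrix W, whose Hessian splits as H = D - B with D block diagonal (purely local) and B supported on the network edges, so a truncated series for H^{-1} can be evaluated with K rounds of neighbor exchanges. The local costs, the values of alpha and K, and the ring topology are all illustrative choices.

```python
import numpy as np

# Toy NN-K sketch: quadratic local costs f_i(y) = 0.5 * (y - b_i)^2
# (so grad^2 f_i = 1) on a ring of n nodes, scalar decision variables.
rng = np.random.default_rng(0)
n = 10
b = rng.normal(size=n)        # node i privately holds b_i
alpha = 0.1                   # penalty coefficient (assumed value)
K = 2                         # NN-K: K extra neighbor exchanges per step

# Doubly stochastic ring weights: each node averages with its 2 neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    W[i, i] = 0.5

# Hessian splitting H = alpha*I + (I - W) = D - B, where
# D = alpha*I + 2*(I - diag(W)) is diagonal and B = I - 2*diag(W) + W
# only couples neighboring nodes.
D = alpha + 2.0 * (1.0 - np.diag(W))      # diagonal of D, as a vector
B = W.copy()
B[np.diag_indices(n)] = 1.0 - np.diag(W)

y = np.zeros(n)
for t in range(50):
    # Gradient of the penalized objective (one neighbor exchange: W @ y).
    g = alpha * (y - b) + (y - W @ y)
    # Truncated-series Newton direction d ~ H^{-1} g:
    #   d_0 = D^{-1} g,  d_{k+1} = D^{-1} (B d_k + g).
    d = g / D
    for _ in range(K):        # each pass costs one more neighbor exchange
        d = (B @ d + g) / D
    y = y - d                 # a unit step suffices for this quadratic toy

print("spread of local copies:", y.max() - y.min())
print("gradient norm at limit:", np.linalg.norm(alpha * (y - b) + y - W @ y))
```

Note that with a fixed penalty coefficient alpha, the iterates converge to the minimizer of the penalized objective rather than the exact consensus optimum; closing that gap is what the adaptive ANN variant mentioned in the abstract is for.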
- Publication:
- arXiv e-prints
- Pub Date:
- December 2014
- DOI:
- 10.48550/arXiv.1412.3740
- arXiv:
- arXiv:1412.3740
- Bibcode:
- 2014arXiv1412.3740M
- Keywords:
- Mathematics - Optimization and Control
- E-Print:
- (to appear in Asilomar Conference on Signals, Systems, and Computers 2014)