Sensitivity Assisted Alternating Directions Method of Multipliers for Distributed Optimization and Statistical Learning
Abstract
This paper considers the problem of distributed model fitting using the alternating directions method of multipliers (ADMM). ADMM splits the learning problem into several smaller subproblems, usually by partitioning the data samples. The subproblems can be solved in parallel by a set of worker computing nodes coordinated by a master node, and are solved repeatedly until convergence. At each iteration, each worker node must solve a convex optimization problem whose difficulty grows with the size of the problem. In this paper, we propose a sensitivity-assisted ADMM algorithm that leverages parametric sensitivities so that the subproblem solutions can be approximated using a tangential predictor, reducing the computational burden to a single linear solve. We study the convergence properties of the proposed sensitivity-assisted ADMM algorithm, and illustrate its numerical performance on a nonlinear parameter estimation problem and a multilayer perceptron learning problem.
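To make the worker/master structure of the abstract concrete, here is a minimal sketch of standard consensus ADMM for distributed least-squares model fitting, with the data partitioned across workers. All names (`consensus_admm_lstsq`, `A_parts`, `rho`) are illustrative, not from the paper; for a quadratic loss each subproblem already reduces to one linear solve per iteration, which is the per-iteration cost the paper's tangential predictor targets for general nonlinear subproblems.

```python
import numpy as np

def consensus_admm_lstsq(A_parts, b_parts, rho=1.0, iters=300):
    """Consensus ADMM for min_x sum_i (1/2)||A_i x - b_i||^2.

    Each (A_i, b_i) block is held by one worker; z is the master's
    global (consensus) variable. Hypothetical sketch, not the paper's
    sensitivity-assisted variant.
    """
    n = A_parts[0].shape[1]
    N = len(A_parts)
    z = np.zeros(n)                       # global consensus variable
    xs = [np.zeros(n) for _ in range(N)]  # local variables, one per worker
    us = [np.zeros(n) for _ in range(N)]  # scaled dual variables
    for _ in range(iters):
        # Worker step (parallelizable):
        # argmin_x (1/2)||A_i x - b_i||^2 + (rho/2)||x - z + u_i||^2,
        # which for a quadratic loss is a single linear solve.
        for i in range(N):
            A, b = A_parts[i], b_parts[i]
            lhs = A.T @ A + rho * np.eye(n)
            rhs = A.T @ b + rho * (z - us[i])
            xs[i] = np.linalg.solve(lhs, rhs)
        # Master step: average the workers' variables, then dual update.
        z = np.mean([xs[i] + us[i] for i in range(N)], axis=0)
        for i in range(N):
            us[i] += xs[i] - z
    return z
```

Because the summed local losses equal the full least-squares objective, the consensus iterate converges to the same solution a centralized solver would find on the stacked data.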
Publication: arXiv e-prints
Pub Date: September 2020
DOI: 10.48550/arXiv.2009.05845
arXiv: arXiv:2009.05845
Bibcode: 2020arXiv200905845K
Keywords: Mathematics - Optimization and Control