A Fully Stochastic Primal-Dual Algorithm
Abstract
A new stochastic primal-dual algorithm for solving a composite optimization problem is proposed. It is assumed that all the functions and operators entering the optimization problem are given as statistical expectations. These expectations are unknown but are revealed across time through i.i.d. realizations. The proposed algorithm is proven to converge to a saddle point of the Lagrangian function. Within the framework of monotone operator theory, the convergence proof relies on recent results on the stochastic Forward-Backward algorithm involving random monotone operators. An example of convex optimization under stochastic linear constraints is considered.
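The setting described in the abstract can be illustrated with a minimal sketch. The code below is not the paper's exact scheme: it is a hypothetical stochastic primal-dual (Arrow-Hurwicz-type) iteration for a toy instance of the paper's example class, minimizing 0.5*||x||^2 under a linear constraint whose data (a, b) are only observed through i.i.d. noisy samples; the step-size rule, noise level, and tail averaging are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact algorithm):
# stochastic primal-dual iteration for
#   minimize 0.5*||x||^2  subject to  E[a(xi)]^T x = E[b(xi)],
# where at each step only i.i.d. noisy samples (a_k, b_k) of the
# constraint data are revealed.

rng = np.random.default_rng(0)
a_mean, b_mean = np.array([1.0, 1.0]), 2.0   # E[a], E[b]; minimizer x* = [1, 1]
x, lam = np.zeros(2), 0.0                    # primal iterate / dual multiplier
x_avg, n_avg = np.zeros(2), 0

N = 50_000
for k in range(N):
    gamma = 1.0 / (k + 1) ** 0.6             # Robbins-Monro step sizes
    a_k = a_mean + 0.05 * rng.standard_normal(2)   # noisy constraint row
    b_k = b_mean + 0.05 * rng.standard_normal()    # noisy right-hand side
    # Descent in x and ascent in lambda on the sampled Lagrangian
    # L_k(x, lam) = 0.5*||x||^2 + lam * (a_k @ x - b_k):
    x = x - gamma * (x + lam * a_k)
    lam = lam + gamma * (a_k @ x - b_k)
    if k >= N // 2:                          # Polyak-style tail averaging
        x_avg += x
        n_avg += 1

x_avg /= n_avg
print(x_avg)   # should approach the constrained minimizer [1, 1]
```

With decreasing steps satisfying the usual Robbins-Monro conditions, the averaged primal iterate settles near the saddle point of the (mean) Lagrangian, here x* = [1, 1] with multiplier lam* = -1.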
Publication: arXiv e-prints
Pub Date: January 2019
DOI: 10.48550/arXiv.1901.08170
arXiv: arXiv:1901.08170
Bibcode: 2019arXiv190108170B
Keywords: Mathematics - Optimization and Control; Statistics - Machine Learning