Convergence of the Iterates in Mirror Descent Methods
Abstract
We consider centralized and distributed mirror descent algorithms over a finite-dimensional Hilbert space, and prove that the problem variables converge to an optimizer of a possibly nonsmooth function when the step sizes are square summable but not summable. Prior literature has focused on the convergence of the function value to its optimum. However, applications from distributed optimization and learning in games require the convergence of the variables to an optimizer, which is generally not guaranteed without assuming strong convexity of the objective function. We provide numerical simulations comparing entropic mirror descent and standard subgradient methods for the robust regression problem.
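As a concrete illustration of the entropic mirror descent method mentioned in the abstract, the following minimal sketch applies it to a small robust (ℓ1) regression instance. The simplex constraint on the variable, the step-size constant, and the synthetic data are assumptions made purely for illustration and are not taken from the paper; the step sizes η_k = c/(k+1) are square summable but not summable, matching the condition under which the iterates are shown to converge.

```python
import numpy as np

def entropic_mirror_descent(A, b, n_iters=5000, c=1.0):
    """Entropic mirror descent for min_{x in simplex} ||A x - b||_1.

    Illustrative sketch only; the paper's exact experimental setup may differ.
    Step sizes eta_k = c / (k + 1) are square summable but not summable.
    """
    n = A.shape[1]
    x = np.full(n, 1.0 / n)           # start at the barycenter of the simplex
    for k in range(n_iters):
        g = A.T @ np.sign(A @ x - b)  # a subgradient of the l1 loss
        eta = c / (k + 1)
        # Multiplicative (exponentiated-gradient) update induced by the
        # negative-entropy mirror map on the probability simplex.
        x = x * np.exp(-eta * g)
        x /= x.sum()
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    x_true = np.zeros(10)
    x_true[:3] = [0.5, 0.3, 0.2]      # hypothetical ground-truth weights
    b = A @ x_true + 0.1 * rng.standard_normal(200)
    print(np.round(entropic_mirror_descent(A, b), 3))
```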
- Publication: arXiv e-prints
- Pub Date: May 2018
- arXiv: arXiv:1805.01526
- Bibcode: 2018arXiv180501526D
- Keywords: Mathematics - Optimization and Control