Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems
Abstract
In this paper we study the smooth convex-concave saddle point problem. Specifically, we analyze the last iterate convergence properties of the Extragradient (EG) algorithm. It is well known that the ergodic (averaged) iterates of EG converge at a rate of $O(1/T)$ (Nemirovski, 2004). In this paper, we show that the last iterate of EG converges at a rate of $O(1/\sqrt{T})$. To the best of our knowledge, this is the first paper to provide a convergence rate guarantee for the last iterate of EG for the smooth convex-concave saddle point problem. Moreover, we show that this rate is tight by proving a lower bound of $\Omega(1/\sqrt{T})$ for the last iterate. This lower bound therefore shows a quadratic separation between the convergence rates of the ergodic and last iterates in smooth convex-concave saddle point problems.
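To make the object of study concrete, the following is a minimal sketch of the Extragradient update on the classic bilinear saddle point problem $\min_x \max_y\, xy$, whose unique saddle point is $(0, 0)$. The step size `eta`, iteration count, and test problem are illustrative choices for this sketch, not parameters taken from the paper; EG's defining feature is the extrapolation (midpoint) step, after which the actual update uses the operator evaluated at that midpoint.

```python
import numpy as np

def operator(z):
    """Monotone operator F(x, y) = (df/dx, -df/dy) for f(x, y) = x * y."""
    x, y = z
    return np.array([y, -x])

def extragradient(z0, eta=0.1, steps=200):
    """Run EG from z0; returns the last iterate (not an average)."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z_mid = z - eta * operator(z)    # extrapolation (midpoint) step
        z = z - eta * operator(z_mid)    # update using the midpoint operator
    return z

z_last = extragradient([1.0, 1.0])
# For this bilinear problem the last iterate moves toward the saddle (0, 0),
# whereas plain simultaneous gradient descent-ascent would spiral outward.
print(np.linalg.norm(z_last))
```

Note that on this particular bilinear instance the last iterate contracts toward the saddle point at every step; the $O(1/\sqrt{T})$ versus $O(1/T)$ separation discussed in the abstract concerns worst-case rates over the whole smooth convex-concave class, not this single example.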
Publication: arXiv e-prints
Pub Date: January 2020
arXiv: arXiv:2002.00057
Bibcode: 2020arXiv200200057G
Keywords: Computer Science - Machine Learning; Mathematics - Optimization and Control; Statistics - Machine Learning
E-Print: 27 pages