The Convergence Rate of SGD's Final Iterate: Analysis on Dimension Dependence
Abstract
Stochastic Gradient Descent (SGD) is among the simplest and most popular methods in optimization. The convergence rate of SGD has been extensively studied, and tight analyses have been established for the running-average scheme, but the suboptimality of the final iterate is still not well understood. shamir2013stochastic gave the best known upper bounds for the final iterate of SGD minimizing non-smooth convex functions: $O(\log T/\sqrt{T})$ for Lipschitz convex functions and $O(\log T/T)$ under an additional strong-convexity assumption. The best known lower bounds, however, are weaker than the upper bounds by a factor of $\log T$. harvey2019tight gave matching lower bounds, but their construction requires dimension $d = T$. koren2020open then asked how to characterize the final-iterate convergence of SGD in the constant-dimension setting. In this paper, we answer this question in the more general setting of any $d \leq T$, proving $\Omega(\log d/\sqrt{T})$ and $\Omega(\log d/T)$ lower bounds on the suboptimality of the final iterate of SGD for minimizing non-smooth Lipschitz convex and strongly convex functions, respectively, with standard step-size schedules. Our results provide the first general dimension-dependent lower bounds on the convergence of SGD's final iterate, partially resolving a COLT open question raised by koren2020open. We also present further evidence that the correct rate in one dimension should be $\Theta(1/\sqrt{T})$, including a proof of a tight $O(1/\sqrt{T})$ upper bound for one-dimensional special cases in settings more general than those of koren2020open.
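To make the objects discussed above concrete, here is a minimal numerical sketch (not from the paper): SGD on the one-dimensional Lipschitz convex function f(x) = |x| with noisy subgradients and the standard step sizes eta_t = c/sqrt(t), comparing the suboptimality of the final iterate against that of the running average of the iterates. The function names, noise model, and constants are illustrative assumptions, not the paper's constructions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_subgradient(x, noise_std=1.0):
    # Subgradient of |x| (sign(0) taken as 0) plus zero-mean Gaussian noise.
    return np.sign(x) + noise_std * rng.standard_normal()

def run_sgd(T, x0=1.0, c=1.0):
    # SGD with the standard step size eta_t = c / sqrt(t) for the
    # Lipschitz convex (non-strongly-convex) setting.
    x = x0
    avg = 0.0
    for t in range(1, T + 1):
        eta = c / np.sqrt(t)
        x -= eta * noisy_subgradient(x)
        avg += (x - avg) / t          # online running average of the iterates
    # Suboptimality f(x) - f(x*) with minimizer x* = 0, so f(x) - f(x*) = |x|.
    return abs(x), abs(avg)

for T in (10**3, 10**4, 10**5):
    last, mean = run_sgd(T)
    print(f"T={T:>6}: final iterate {last:.4f}, average iterate {mean:.4f}")
```

In such experiments the averaged iterate is typically noticeably less noisy than the final iterate, which is the gap between the two schemes that the bounds above quantify.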
Publication: arXiv e-prints
Pub Date: June 2021
arXiv: arXiv:2106.14588
Bibcode: 2021arXiv210614588L
Keywords: Mathematics - Optimization and Control; Computer Science - Machine Learning; Statistics - Machine Learning