Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes
Abstract
We consider stochastic gradient descent (SGD) for least-squares regression with potentially several passes over the data. While several passes have been widely reported to perform better in practice in terms of predictive performance on unseen data, the existing theoretical analysis of SGD suggests that a single pass is statistically optimal. While this is true for low-dimensional easy problems, we show that for hard problems, multiple passes lead to statistically optimal predictions while a single pass does not; we also show that in these hard models, the optimal number of passes over the data increases with sample size. In order to define the notion of hardness and show that our predictive performance is optimal, we consider potentially infinite-dimensional models and notions typically associated with kernel methods, namely, the decay of eigenvalues of the covariance matrix of the features and the complexity of the optimal predictor as measured through the covariance matrix. We illustrate our results on synthetic experiments with non-linear kernel methods and on a classical benchmark with a linear model.
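The multi-pass SGD setting described above can be sketched in a few lines of code. This is a hypothetical minimal illustration, not the paper's algorithm: the constant step size, per-pass shuffling, and toy data are placeholder choices for exposition only.

```python
import numpy as np

def sgd_least_squares(X, y, passes=5, lr=0.05, seed=0):
    """Plain SGD for least-squares regression with multiple passes.

    Illustrative sketch: step-size schedule and sampling scheme are
    placeholder assumptions, not the scheme analyzed in the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(passes):            # each pass = one sweep over the data
        for i in rng.permutation(n):   # shuffle examples within each pass
            # gradient of 0.5 * (x_i . w - y_i)^2 with respect to w
            g = (X[i] @ w - y[i]) * X[i]
            w -= lr * g
    return w

# Toy usage: noiseless linear data, so more passes drive w toward w_true.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = sgd_least_squares(X, y, passes=20, lr=0.05)
```

On this easy, noiseless toy problem additional passes simply refine the fit; the paper's point is that on hard problems (slowly decaying covariance eigenvalues, complex optimal predictors) the number of passes must itself grow with the sample size to reach the optimal rate.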
Publication: arXiv e-prints
Pub Date: May 2018
arXiv: arXiv:1805.10074
Bibcode: 2018arXiv180510074P
Keywords:
Computer Science - Machine Learning;
Mathematics - Optimization and Control;
Mathematics - Statistics Theory;
Statistics - Machine Learning
E-Print: Neural Information Processing Systems (NIPS), Dec 2018, Montréal, Canada. 2018