Improving SAGA via a Probabilistic Interpolation with Gradient Descent
Abstract
We develop and analyze a new algorithm for empirical risk minimization, which is the key paradigm for training supervised machine learning models. Our method, SAGD, is based on a probabilistic interpolation of SAGA and gradient descent (GD). In particular, in each iteration we take a gradient step with probability $q$ and a SAGA step with probability $1-q$. We show that, surprisingly, the total expected complexity of the method (obtained by multiplying the number of iterations by the expected number of gradients computed in each iteration) is minimized for a nontrivial probability $q$. For example, for a well-conditioned problem the choice $q=1/(n-1)^2$, where $n$ is the number of data samples, gives a method with an overall complexity better than that of both GD and SAGA. We further generalize the results to a probabilistic interpolation of SAGA and minibatch SAGA, which allows us to compute both the optimal probability and the optimal minibatch size. While the theoretical improvement may not be large, the practical improvement is robustly present across all synthetic and real data we tested on, and can be substantial. Our theoretical results suggest that for this optimal minibatch size our method achieves linear speedup in minibatch size, which is of key practical importance as minibatch implementations are used to train machine learning models in practice. Moreover, empirical evidence suggests that a linear speedup in minibatch size can be attained with a parallel implementation.
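The interpolation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `sagd`, its signature, and all hyperparameter defaults are assumptions. With probability $q$ the sketch takes a full-gradient (GD) step and refreshes the whole gradient table (computing $n$ gradients); otherwise it takes a standard SAGA step (computing one gradient), so the expected number of gradients per iteration is $qn + (1-q)$.

```python
import numpy as np

def sagd(grad_i, n, x0, lr=0.05, q=0.1, iters=3000, rng=None):
    """Sketch of the SAGD idea: with probability q take a GD step,
    otherwise a SAGA step. `grad_i(x, i)` returns the gradient of the
    i-th loss term; the name and signature are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    # SAGA's table of the most recently seen gradient for each sample.
    table = np.array([grad_i(x, i) for i in range(n)])
    avg = table.mean(axis=0)
    for _ in range(iters):
        if rng.random() < q:
            # GD step: compute all n gradients, refresh the table.
            table = np.array([grad_i(x, i) for i in range(n)])
            avg = table.mean(axis=0)
            x = x - lr * avg
        else:
            # SAGA step: one fresh gradient, variance-reduced update.
            j = rng.integers(n)
            g = grad_i(x, j)
            x = x - lr * (g - table[j] + avg)
            avg = avg + (g - table[j]) / n  # keep the average in sync
            table[j] = g
    return x
```

For example, minimizing $\frac{1}{n}\sum_i (x - a_i)^2$ (whose gradient terms are $2(x - a_i)$) should drive `x` toward the mean of the $a_i$.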
 Publication:

arXiv e-prints
 Pub Date:
 June 2018
 DOI:
 10.48550/arXiv.1806.05633
 arXiv:
 arXiv:1806.05633
 Bibcode:
 2018arXiv180605633B
 Keywords:

 Mathematics - Optimization and Control