The Geometry of Sign Gradient Descent
Abstract
Sign-based optimization methods have become popular in machine learning due to their favorable communication cost in distributed optimization and their surprisingly good performance in neural network training. Furthermore, they are closely connected to so-called adaptive gradient methods like Adam. Recent works on signSGD have used a non-standard "separable smoothness" assumption, whereas some older works study sign gradient descent as steepest descent with respect to the $\ell_\infty$-norm. In this work, we unify these existing results by showing a close connection between separable smoothness and $\ell_\infty$-smoothness and argue that the latter is the weaker and more natural assumption. We then proceed to study the smoothness constant with respect to the $\ell_\infty$-norm and thereby isolate geometric properties of the objective function which affect the performance of sign-based methods. In short, we find sign-based methods to be preferable over gradient descent if (i) the Hessian is to some degree concentrated on its diagonal, and (ii) its maximal eigenvalue is much larger than the average eigenvalue. Both properties are common in deep networks.
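The update rule the abstract refers to is simple: each step moves along the elementwise sign of the gradient, which (up to the step-size choice) is steepest descent with respect to the $\ell_\infty$-norm. A minimal sketch, assuming a plain full-gradient setting; the function and parameter names below are illustrative and not taken from the paper:

```python
import numpy as np

def sign_gradient_descent(grad_f, x0, step_size=0.01, num_steps=200):
    """Minimal sketch of sign gradient descent.

    Each iterate moves a fixed step along the elementwise sign of the
    gradient, i.e. only the sign of each coordinate is used, not its
    magnitude (hypothetical helper, not the paper's code).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        g = grad_f(x)
        x = x - step_size * np.sign(g)  # update uses only coordinate-wise signs
    return x

# Usage on a simple ill-conditioned quadratic f(x) = 0.5 * x^T A x,
# where the largest Hessian eigenvalue far exceeds the average one.
A = np.diag([100.0, 1.0])
x_final = sign_gradient_descent(lambda x: A @ x, x0=[1.0, 1.0])
```

The toy quadratic is chosen to loosely mirror condition (ii) from the abstract: its Hessian is diagonal and its largest eigenvalue dominates the average eigenvalue.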
Publication: arXiv e-prints
Pub Date: February 2020
DOI: 10.48550/arXiv.2002.08056
arXiv: arXiv:2002.08056
Bibcode: 2020arXiv200208056B
Keywords: Computer Science - Machine Learning; Statistics - Machine Learning