Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
Abstract
In this paper we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime, where the number of observations is smaller than the number of parameters in the model. We show that with quadratic activations the optimization landscape of training such shallow neural networks has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data set of input/output pairs. For differentiable activation functions we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.
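The abstract does not spell out the training objective; as a rough, hypothetical illustration (not taken from the paper), the sketch below sets up a planted quadratic-activation model y_i = sum_l v_l (w_l^T x_i)^2 with i.i.d. Gaussian inputs and runs plain gradient descent on the resulting least-squares loss in an over-parameterized setting (more parameters than samples). All dimensions, the step size, and the initialization are arbitrary choices made for the sketch.

```python
import numpy as np

# Hypothetical setup (not from the paper): a one-hidden-layer network with
# quadratic activations, f(x; W) = sum_l v_l * (w_l^T x)^2, trained by plain
# gradient descent on a least-squares loss. Labels come from planted weights
# W_star and i.i.d. Gaussian inputs, mirroring the realizable model in the
# abstract. Dimensions, step size, and initialization are illustrative only.
rng = np.random.default_rng(0)
d, k, n = 10, 20, 50                    # input dim, hidden units, samples (n < k*d parameters)
W_star = rng.standard_normal((k, d))    # planted hidden-layer weights
v = np.ones(k) / k                      # fixed output-layer weights

X = rng.standard_normal((n, d))         # i.i.d. Gaussian inputs
y = ((X @ W_star.T) ** 2) @ v           # labels generated by the planted model

def loss_and_grad(W):
    """Mean-squared loss 0.5/n * sum_i (f(x_i; W) - y_i)^2 and its gradient in W."""
    Z = X @ W.T                         # (n, k) pre-activations w_l^T x_i
    r = (Z ** 2) @ v - y                # residuals f(x_i; W) - y_i
    # d f(x_i; W) / d w_l = 2 * v_l * (w_l^T x_i) * x_i
    G = (2.0 / n) * (r[:, None] * Z * v).T @ X
    return 0.5 / n * np.dot(r, r), G

W = rng.standard_normal((k, d))         # generic random initialization
step = 1e-2                             # hand-tuned step size for this toy problem
for _ in range(10000):
    L, G = loss_and_grad(W)
    W -= step * G

print(f"final training loss: {L:.3e}")  # typically small in this over-parameterized toy fit
```

In this toy run the number of trainable parameters (k*d = 200) exceeds the number of samples (n = 50), so an exact fit exists; whether vanilla gradient descent with a fixed step size reaches it depends on the step size and initialization, which here are simply hand-picked.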
- Publication: arXiv e-prints
- Pub Date: July 2017
- arXiv: arXiv:1707.04926
- Bibcode: 2017arXiv170704926S
- Keywords: Computer Science - Machine Learning; Computer Science - Information Theory; Mathematics - Optimization and Control; Statistics - Machine Learning
- E-Print: A mistake in the argument of Proposition 7.1 in the previous version of this manuscript was fixed