Wide neural networks of any depth evolve as linear models under gradient descent
Abstract
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks (NNs) have made a theory of learning dynamics elusive. In this work, we show that for wide NNs the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian NNs and Gaussian processes (GPs), gradient-based training of wide NNs with a squared loss produces test set predictions drawn from a GP with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version, even for finite, practically sized networks. This agreement is robust across different architectures, optimization methods, and loss functions. *This article is an updated version of a paper presented at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
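To make the linearization the abstract refers to concrete, the following is a minimal sketch (not the authors' code; the function names `f`, `init_params`, and `linearize` are illustrative) of the first-order Taylor expansion of a network around its initial parameters, written in JAX:

```python
# Sketch of f_lin(x; theta) = f(x; theta_0) + J_theta f(x; theta_0) (theta - theta_0),
# the linearized model described in the abstract. Illustrative only.
import jax
import jax.numpy as jnp

def init_params(key, widths):
    """Random parameters for a fully connected network; widths = [in, hidden..., out]."""
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, k = jax.random.split(key)
        params.append((jax.random.normal(k, (d_in, d_out)) / jnp.sqrt(d_in),
                       jnp.zeros(d_out)))
    return params

def f(params, x):
    """Forward pass of a simple tanh network."""
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def linearize(f, params0):
    """Return f_lin, the first-order Taylor expansion of f around params0."""
    def f_lin(params, x):
        # Displacement of the current parameters from initialization.
        dparams = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
        # Jacobian-vector product gives f(params0, x) and its directional derivative.
        y0, dy = jax.jvp(lambda p: f(p, x), (params0,), (dparams,))
        return y0 + dy
    return f_lin
```

Training `f_lin` with gradient descent on a squared loss is what the paper compares against the original network; the abstract's claim is that, as the widths grow, the two sets of predictions agree.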
- Publication:
- Journal of Statistical Mechanics: Theory and Experiment
- Pub Date:
- December 2020
- DOI:
- 10.1088/1742-5468/abc62b
- arXiv:
- arXiv:1902.06720
- Bibcode:
- 2020JSMTE2020l4002L
- Keywords:
- machine learning;
- Statistics - Machine Learning;
- Computer Science - Machine Learning
- E-Print:
- 12+16 pages