Comparing dynamics: deep neural networks versus glassy systems
Abstract
We analyze numerically the training dynamics of deep neural networks (DNNs) using methods developed in the statistical physics of glassy systems. We address two main issues: (1) the complexity of the loss landscape and of the dynamics within it, and (2) to what extent DNNs share similarities with glassy systems. Our findings, obtained for different architectures and datasets, suggest that during training the dynamics slows down because of an increasingly large number of flat directions. At large times, when the loss approaches zero, the system diffuses at the bottom of the landscape. Despite some similarities with the dynamics of mean-field glassy systems, in particular the absence of barrier crossing, we find distinctive dynamical behaviors in the two cases, showing that the statistical properties of the corresponding loss and energy landscapes differ. In contrast, when the network is under-parametrized we observe typical glassy behavior, suggesting the existence of different phases depending on whether the network is under-parametrized or over-parametrized.
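To make the comparison concrete, the sketch below illustrates the kind of two-time diagnostic used in glass physics to detect aging and slow dynamics: the mean-square displacement of the weight vector between a waiting time t_w and a later time t. This is a minimal illustration, not the authors' code; the toy model, data, optimizer settings, and the times at which snapshots are taken are all assumptions chosen for brevity.

```python
# Minimal sketch (illustrative, not the paper's code): track the
# mean-square displacement Delta(t, t_w) = |w(t) - w(t_w)|^2 / N of
# the weight vector during SGD training. In a glassy system this
# quantity "ages" (it depends on t_w, not only on t - t_w); checking
# for such dependence is the spirit of the comparison in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector

torch.manual_seed(0)

# Toy over-parametrized regression task (stand-in for the paper's datasets).
X = torch.randn(256, 20)
y = torch.randn(256, 1)

model = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

waiting_times = [10, 100, 1000]   # illustrative choices of t_w
snapshots = {}                     # weight vectors w(t_w)

def weights():
    """Flatten all parameters into a single detached vector."""
    return parameters_to_vector(model.parameters()).detach().clone()

for t in range(1, 5001):
    opt.zero_grad()
    loss = F.mse_loss(model(X), y)
    loss.backward()
    opt.step()

    if t in waiting_times:
        snapshots[t] = weights()

    # Periodically report Delta(t, t_w) for each stored waiting time.
    if t % 1000 == 0:
        w_t = weights()
        for t_w, w_tw in snapshots.items():
            msd = ((w_t - w_tw) ** 2).mean().item()
            print(f"t={t:5d}  t_w={t_w:4d}  Delta(t,t_w)={msd:.4f}")
```

If Delta(t, t_w) collapses onto a single curve in t - t_w, the dynamics is stationary; a persistent dependence on t_w is the aging signature characteristic of glasses.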
This article is an updated version of a paper presented at ICML 2018 and published in Proceedings of Machine Learning Research 80:324-333 (2018).

- Publication: Journal of Statistical Mechanics: Theory and Experiment
- Pub Date: December 2019
- DOI: 10.1088/1742-5468/ab3281
- arXiv: arXiv:1803.06969
- Bibcode: 2019JSMTE..12.4013B
- Keywords: Statistics - Machine Learning; Condensed Matter - Disordered Systems and Neural Networks; Computer Science - Machine Learning
- E-Print: 10 pages, 5 figures. Version accepted at ICML 2018