Why Does Deep and Cheap Learning Work So Well?
Abstract
We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various "no-flattening theorems" showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.
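The 2^n-neuron multiplication bound builds on a pairwise gadget from the paper: any smooth nonlinearity σ with σ''(0) ≠ 0 can approximate the product of two inputs with just four hidden neurons, by Taylor-expanding σ(±λu ± λv) around 0 and cancelling all but the cross term. A minimal numerical sketch, assuming softplus as the nonlinearity (for which σ''(0) = 1/4; the function name `neural_mul` and the scale λ are illustrative choices, not from the paper):

```python
import math

def softplus(x):
    """Smooth nonlinearity with nonzero second derivative at 0 (sigma''(0) = 1/4)."""
    return math.log1p(math.exp(x))

def neural_mul(u, v, lam=1e-2, sigma=softplus, sigma_pp0=0.25):
    """Approximate u*v with four sigma-neurons in one hidden layer.

    Taylor expansion: sigma(a+b) + sigma(-a-b) - sigma(a-b) - sigma(-a+b)
    = sigma''(0) * [(a+b)^2 - (a-b)^2] + O(4th order) = 4*sigma''(0)*a*b + ...
    Shrinking the inputs by lam and rescaling the output by 1/lam^2 makes the
    higher-order error O(lam^2).
    """
    a, b = lam * u, lam * v
    num = sigma(a + b) + sigma(-a - b) - sigma(a - b) - sigma(-a + b)
    return num / (4 * sigma_pp0 * lam**2)

print(neural_mul(0.3, 0.5))   # ≈ 0.15
print(neural_mul(-2.0, 3.0))  # ≈ -6.0
```

Multiplying n variables in a single hidden layer requires exponentially many such units, whereas a deep network can pair them up recursively with only linearly many neurons, which is the gap the no-flattening theorems formalize.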
 Publication:

Journal of Statistical Physics
 Pub Date:
 September 2017
 DOI:
 10.1007/s10955-017-1836-5
 arXiv:
 arXiv:1608.08225
 Bibcode:
 2017JSP...168.1223L
 Keywords:

 Artificial neural networks;
 Deep learning;
 Statistical physics;
 Condensed Matter - Disordered Systems and Neural Networks;
 Computer Science - Machine Learning;
 Computer Science - Neural and Evolutionary Computing;
 Statistics - Machine Learning
 E-Print:
 Replaced to match version published in Journal of Statistical Physics: https://link.springer.com/article/10.1007/s10955-017-1836-5 Improved refs &