A Linear Frequency Principle Model to Understand the Absence of Overfitting in Neural Networks
Abstract
Why heavily parameterized neural networks (NNs) do not overfit the data is an important, long-standing open question. We propose a phenomenological model of NN training to explain this non-overfitting puzzle. Our linear frequency principle (LFP) model accounts for a key dynamical feature of NNs: they learn low frequencies first, irrespective of microscopic details. Theory based on our LFP model shows that low-frequency dominance of the target function is the key condition for the non-overfitting of NNs, and this is verified by experiments. Furthermore, through an idealized two-layer NN, we unravel how the detailed microscopic NN training dynamics statistically give rise to an LFP model with quantitative prediction power.

Supported by the National Key R&D Program of China (Grant No. 2019YFA0709503), the Shanghai Sailing Program, the Natural Science Foundation of Shanghai (Grant No. 20ZR1429000), the National Natural Science Foundation of China (Grant No. 62002221), the Shanghai Municipal Science and Technology Project (Grant No. 20JC1419500), and the HPC of the School of Mathematical Sciences at Shanghai Jiao Tong University.
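The central dynamical claim, that NN training fits low-frequency components of the target before high-frequency ones, is easy to reproduce numerically. The sketch below is not the authors' code; the network width, tanh activation, learning rate, and two-frequency target are all illustrative assumptions. It trains a two-layer network by full-batch gradient descent and tracks the residual amplitude at each target frequency.

```python
# Minimal sketch of the frequency principle: a small two-layer tanh network
# trained by gradient descent fits the low-frequency (k=1) component of the
# target before the high-frequency (k=5) one. All hyperparameters here are
# illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# 1D target with a low (k=1) and a high (k=5) frequency component.
n = 256
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
y = np.sin(x) + np.sin(5 * x)
X = x[:, None]                      # shape (n, 1)

# Two-layer tanh network: f(x) = sum_j a_j * tanh(w_j * x + b_j).
m = 200
w = rng.normal(size=m)
b = rng.normal(size=m)
a = rng.normal(size=m) / np.sqrt(m)

def freq_error(residual, k):
    # Residual amplitude at integer frequency k, relative to the target's.
    r_hat = np.fft.rfft(residual) / n
    y_hat = np.fft.rfft(y) / n
    return np.abs(r_hat[k]) / (np.abs(y_hat[k]) + 1e-12)

lr = 0.1
for step in range(50001):
    h = np.tanh(X * w + b)          # hidden activations, shape (n, m)
    pred = h @ a                    # network output, shape (n,)
    res = pred - y
    if step % 10000 == 0:
        print(f"step {step:5d}: rel. residual at k=1: {freq_error(res, 1):.3f}, "
              f"k=5: {freq_error(res, 5):.3f}")
    # Full-batch gradient descent on the mean squared error.
    g = 2 * res / n                 # dL/dpred, shape (n,)
    a -= lr * (h.T @ g)
    dh = (1 - h**2) * np.outer(g, a)  # gradient through tanh
    w -= lr * (X * dh).sum(axis=0)
    b -= lr * dh.sum(axis=0)
```

On a typical run the k = 1 residual decays well before the k = 5 residual does, which is exactly the low-frequency-first behavior that the LFP model takes as its starting point.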
- Publication:
- Chinese Physics Letters
- Pub Date:
- March 2021
- DOI:
- arXiv:
- arXiv:2102.00200
- Bibcode:
- 2021ChPhL..38c8701Z
- Keywords:
- Computer Science - Machine Learning;
- Physics - Data Analysis, Statistics and Probability
- E-Print:
- To appear in Chinese Physics Letters