Understanding training and generalization in deep learning by Fourier analysis
Abstract
Background: It remains an open research question to understand theoretically why Deep Neural Networks (DNNs), equipped with many more parameters than training samples and trained by (stochastic) gradient-based methods, often achieve remarkably low generalization error. Contribution: We study DNN training through Fourier analysis. Our theoretical framework explains: i) DNNs trained with (stochastic) gradient-based methods often give low-frequency components of the target function higher priority during training; ii) small initialization leads to good generalization ability of a DNN while preserving its ability to fit any function. These results are further confirmed by experiments in which DNNs fit natural images, one-dimensional functions, and the MNIST dataset.
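The first claim, that low-frequency components are fitted first, is easy to probe numerically. Below is a minimal PyTorch sketch, not the paper's experimental setup: the network width, optimizer, learning rate, and target frequencies are illustrative assumptions. It trains a small tanh network on a one-dimensional target mixing a low- and a high-frequency sinusoid and tracks the relative DFT error in each frequency band during training.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1D target on [-1, 1]: one low-frequency and one high-frequency component.
# With 256 samples over the interval, sin(pi*x) sits at DFT index 1 and
# sin(10*pi*x) at DFT index 10.
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(np.pi * x) + 0.5 * torch.sin(10 * np.pi * x)

# Illustrative architecture and optimizer (assumptions, not the paper's).
net = nn.Sequential(nn.Linear(1, 200), nn.Tanh(), nn.Linear(200, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def band_error(pred, target, k, width=1):
    """Relative error of the DFT coefficients near frequency index k."""
    P = torch.fft.rfft(pred.squeeze())
    T = torch.fft.rfft(target.squeeze())
    sl = slice(max(k - width, 0), k + width + 1)
    return (torch.abs(P[sl] - T[sl]).sum() / torch.abs(T[sl]).sum()).item()

for step in range(5001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            p = net(x)
        print(f"step {step:5d}  low-freq err: {band_error(p, y, 1):.3f}  "
              f"high-freq err: {band_error(p, y, 10):.3f}")
```

On a typical run the low-frequency band error drops well before the high-frequency one, which is the qualitative behavior the abstract describes; exact curves depend on the seed, architecture, and learning rate.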
- Publication: arXiv e-prints
- Pub Date: August 2018
- DOI: 10.48550/arXiv.1808.04295
- arXiv: arXiv:1808.04295
- Bibcode: 2018arXiv180804295X
- Keywords: Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Mathematics - Optimization and Control; Mathematics - Statistics Theory; Statistics - Machine Learning; 68Q32; 68T01; I.2.6
- E-Print: 10 pages, 4 figures