One-pass Stochastic Gradient Descent in Overparametrized Two-layer Neural Networks
Abstract
There has been a recent surge of interest in understanding the convergence of gradient descent (GD) and stochastic gradient descent (SGD) in overparameterized neural networks. Most previous works assume that the training data is provided a priori in a batch, while less attention has been paid to the important setting where the training data arrives in a stream. In this paper, we study the streaming data setup and show that with overparameterization and random initialization, the prediction error of two-layer neural networks under one-pass SGD converges in expectation. The convergence rate depends on the eigen-decomposition of the integral operator associated with the so-called neural tangent kernel (NTK). A key step of our analysis is to show that a random kernel function converges to the NTK with high probability using the VC dimension and McDiarmid's inequality.
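To make the streaming setup concrete, the following is a minimal sketch of one-pass SGD on an overparameterized two-layer ReLU network with random initialization, where each sample is used for exactly one gradient step and then discarded. The width m, step size eta, the 1/sqrt(m) output scaling, and the choice to train only the first-layer weights are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def one_pass_sgd(data_stream, d, m=1024, eta=0.1, seed=0):
    """One-pass SGD on an overparameterized two-layer ReLU network.

    Each sample (x, y) from `data_stream` is used exactly once.
    Only the first-layer weights W are trained; the second-layer signs `a`
    are fixed at random, a common simplification in NTK-style analyses.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(m, d))          # random first-layer initialization
    a = rng.choice([-1.0, 1.0], size=m)  # fixed random second-layer signs

    def predict(x):
        return (a @ relu(W @ x)) / np.sqrt(m)  # 1/sqrt(m) NTK-style scaling

    for x, y in data_stream:
        # squared-loss gradient for the single fresh sample, then one SGD step
        err = predict(x) - y
        grad_W = err * np.outer(a * (W @ x > 0), x) / np.sqrt(m)
        W -= eta * grad_W
    return predict

# Toy usage (hypothetical target function): a stream of n i.i.d. samples.
d, n = 5, 2000
rng = np.random.default_rng(1)
stream = ((x, np.sin(x[0]))
          for x in (rng.normal(size=d) / np.sqrt(d) for _ in range(n)))
f = one_pass_sgd(stream, d)
```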
Publication: arXiv e-prints
Pub Date: May 2021
arXiv: arXiv:2105.00262
Bibcode: 2021arXiv210500262X
Keywords: Statistics - Machine Learning; Computer Science - Machine Learning; Mathematics - Optimization and Control