On Uniform Convergence and Low-Norm Interpolation Learning
Abstract
We consider an underdetermined noisy linear regression model where the minimum-norm interpolating predictor is known to be consistent, and ask: can uniform convergence in a norm ball, or at least (following Nagarajan and Kolter) the subset of a norm ball that the algorithm selects on a typical input set, explain this success? We show that uniformly bounding the difference between empirical and population errors cannot show any learning in the norm ball, and cannot show consistency for any set, even one depending on the exact algorithm and distribution. But we argue we can explain the consistency of the minimal-norm interpolator with a slightly weaker, yet standard, notion: uniform convergence of zero-error predictors. We use this to bound the generalization error of low-norm (but not minimal-norm) interpolating predictors.
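The abstract's central object, the minimum-norm interpolating predictor in an underdetermined linear model, can be illustrated numerically. The sketch below is not from the paper; the dimensions, noise level, and random data are illustrative assumptions. It computes the minimum-norm interpolator via the Moore-Penrose pseudoinverse and checks that it fits the noisy training data exactly, while any other interpolator (obtained by adding a null-space direction) has strictly larger norm.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): with more features d
# than samples n, infinitely many weight vectors w satisfy X @ w = y.
# The minimum-norm interpolator is the one with smallest Euclidean norm,
# computable via the Moore-Penrose pseudoinverse.
rng = np.random.default_rng(0)
n, d = 20, 100                               # n samples, d features, d > n
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)  # noisy labels

w_min = np.linalg.pinv(X) @ y                # minimum-norm interpolator

# It interpolates the (noisy) training data exactly...
assert np.allclose(X @ w_min, y)

# ...and any other interpolator w_min + v, with v in the null space of X,
# has strictly larger norm, since w_min is orthogonal to that null space.
v = rng.standard_normal(d)
v -= np.linalg.pinv(X) @ (X @ v)             # project v onto null(X)
w_other = w_min + v                          # another exact interpolator
assert np.allclose(X @ w_other, y)
assert np.linalg.norm(w_other) > np.linalg.norm(w_min)
```

The paper's question concerns predictors like `w_other`: interpolators whose norm is low but not minimal, and whether their generalization error can be controlled by uniform-convergence arguments.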
 Publication:

arXiv e-prints
 Pub Date:
 June 2020
 arXiv:
 arXiv:2006.05942
 Bibcode:
 2020arXiv200605942Z
 Keywords:

 Statistics - Machine Learning;
 Computer Science - Machine Learning
 E-Print:
 27 pages