Dimensionality reduction, regularization, and generalization in overparameterized regressions
Abstract
Overparameterization in deep learning is powerful: Very large models fit the training data perfectly and yet often generalize well. This realization brought back the study of linear models for regression, including ordinary least squares (OLS), which, like deep learning, shows a "double-descent" behavior: (1) The risk (expected out-of-sample prediction error) can grow arbitrarily when the number of parameters $p$ approaches the number of samples $n$, and (2) the risk decreases with $p$ for $p>n$, sometimes achieving a lower value than the lowest risk for $p<n$. The divergence of the risk for OLS can be avoided with regularization. In this work, we show that for some data models it can also be avoided with a PCA-based dimensionality reduction (PCA-OLS, also known as principal component regression). We provide non-asymptotic bounds for the risk of PCA-OLS by considering the alignments of the population and empirical principal components. We show that dimensionality reduction improves robustness while OLS is arbitrarily susceptible to adversarial attacks, particularly in the overparameterized regime. We compare PCA-OLS theoretically and empirically with a wide range of projection-based methods, including random projections, partial least squares (PLS), and certain classes of linear two-layer neural networks. These comparisons are made for different data generation models to assess the sensitivity to signal-to-noise and the alignment of regression coefficients with the features. We find that methods in which the projection depends on the training data can outperform methods where the projections are chosen independently of the training data, even those with oracle knowledge of population quantities, another seemingly paradoxical phenomenon that has been identified previously. This suggests that overparameterization may not be necessary for good generalization.
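The PCA-OLS (principal component regression) procedure described above can be sketched in a few lines: project the design matrix onto its top empirical principal components, then run OLS in the reduced space. The sketch below uses synthetic data; the dimensions $n$, $p$, and the number of retained components $k$ are illustrative choices, not values from the paper.

```python
# Minimal PCA-OLS (principal component regression) sketch on synthetic data.
# n, p, k and the data-generating model below are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 100, 10  # overparameterized regime: p > n

# Synthetic regression data.
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + 0.1 * rng.standard_normal(n)

# PCA step: top-k empirical principal components of the centered design.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V_k = Vt[:k].T            # p x k projection matrix
Z = Xc @ V_k              # reduced design matrix, n x k

# OLS step in the reduced space; since k < n, the problem is well-posed,
# avoiding the risk divergence OLS suffers near p = n.
gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
beta_pcr = V_k @ gamma    # coefficients mapped back to the original space

print(beta_pcr.shape)     # (100,)
```

Unlike the training-data-dependent projection here, a random-projection baseline would replace `V_k` with a random orthonormal $p \times k$ matrix chosen independently of `X`.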
Publication:
arXiv e-prints
Pub Date:
November 2020
arXiv:
arXiv:2011.11477
Bibcode:
2020arXiv201111477H
Keywords:
Statistics - Machine Learning;
Computer Science - Machine Learning
E-Print:
SIAM Journal on Mathematics of Data Science Vol. 4, Iss. 1, 2022