Principal Components Bias in Deep Neural Networks
Abstract
Recent work suggests that convolutional neural networks of different architectures learn to classify images in the same order. To understand this phenomenon, we revisit the overparametrized deep linear network model. Our asymptotic analysis, assuming that the hidden layers are wide enough, reveals that the convergence rate of this model's parameters is exponentially faster along directions corresponding to the larger principal components of the data, at a rate governed by the corresponding singular values. We term this convergence pattern the Principal Components bias (PC-bias). We show how the PC-bias streamlines the order of learning of both linear and nonlinear networks, more prominently at earlier stages of learning. We then compare our results to the simplicity bias, showing that the two biases can arise independently and affect the order of learning in different ways. Finally, we discuss how the PC-bias may explain some benefits of early stopping and its connection to PCA, and why deep networks converge more slowly when given random labels.
Publication: arXiv e-prints
Pub Date: May 2021
arXiv: arXiv:2105.05553
Bibcode: 2021arXiv210505553H
Keywords: Computer Science - Machine Learning