Are All Layers Created Equal?
Abstract
Understanding deep neural networks is a major research objective with notable experimental and theoretical attention in recent years. The practical success of excessively large networks underscores the need for better theoretical analyses and justifications. In this paper we focus on layer-wise functional structure and behavior in overparameterized deep models. To do so, we study empirically the layers' robustness to post-training re-initialization and re-randomization of the parameters. We provide experimental results which give evidence for the heterogeneity of layers. Broadly speaking, layers of large deep neural networks can be categorized as either "robust" or "critical". Resetting the robust layers to their initial values does not result in an adverse decline in performance. In many cases, robust layers hardly change throughout training. In contrast, re-initializing critical layers vastly degrades the performance of the network, with the test error rising to essentially that of random guessing. Our study provides further evidence that mere parameter counting or norm calculations are too coarse in studying generalization of deep models, and "flatness" and robustness analysis of trained models need to be examined while taking into account the respective network architectures.
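The abstract's re-initialization test can be made concrete with a minimal sketch (not the authors' code; the model, loader, and helper names below are illustrative assumptions in PyTorch): snapshot the parameters at initialization, train as usual, then reset one layer at a time to its initial values and re-measure test accuracy. Layers whose reset barely moves accuracy are "robust"; layers whose reset collapses accuracy to chance are "critical".

```python
# Illustrative sketch of layer-wise re-initialization robustness (assumed PyTorch setup).
# For re-randomization, one would instead overwrite the layer with fresh random draws.
import copy
import torch

def evaluate(model, test_loader, device="cpu"):
    """Return test accuracy of `model` on `test_loader`."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

def layer_robustness(model, init_state, trained_state, test_loader):
    """For each layer, reset its parameters to their initialization values,
    keep all other layers trained, and record the resulting test accuracy."""
    results = {}
    layer_names = {name.rsplit(".", 1)[0] for name in trained_state}
    for layer in sorted(layer_names):
        probe = copy.deepcopy(model)
        state = copy.deepcopy(trained_state)
        for name in state:
            if name.rsplit(".", 1)[0] == layer:
                state[name] = init_state[name]  # reset only this layer
        probe.load_state_dict(state)
        results[layer] = evaluate(probe, test_loader)
    return results

# Usage (assumed workflow): save init_state = copy.deepcopy(model.state_dict())
# before training and trained_state = copy.deepcopy(model.state_dict()) after,
# then call layer_robustness(model, init_state, trained_state, test_loader).
```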
- Publication:
- arXiv e-prints
- Pub Date:
- February 2019
- DOI:
- 10.48550/arXiv.1902.01996
- arXiv:
- arXiv:1902.01996
- Bibcode:
- 2019arXiv190201996Z
- Keywords:
- Statistics - Machine Learning;
- Computer Science - Artificial Intelligence;
- Computer Science - Machine Learning
- E-Print:
- JMLR 2022, 28 pages, 21 figures