Measuring the generalization performance of a Deep Neural Network (DNN) without relying on a validation set is a difficult task. In this work, we propose exploiting Latent Geometry Graphs (LGGs) to represent the latent spaces of trained DNN architectures. Such graphs are obtained by connecting samples that yield similar latent representations at a given layer of the considered DNN. We then obtain a generalization score by measuring how strongly connected samples of distinct classes are in LGGs. This score allowed us to rank 3rd in the NeurIPS 2020 Predicting Generalization in Deep Learning (PGDL) competition.
- Pub Date: November 2020
- Computer Science - Machine Learning
- Short paper describing the submission that placed 3rd in the NeurIPS 2020 Predicting Generalization in Deep Learning (PGDL) competition. We hope to update it with further analysis once the full data is made available.
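The scoring idea described in the abstract can be sketched roughly as follows. This is a minimal, hypothetical illustration (not the authors' actual implementation): it builds a k-nearest-neighbour graph over latent vectors using Euclidean distance, then scores generalization as the fraction of graph edges that stay within a single class; the function name, the choice of distance, and `k` are all assumptions for illustration.

```python
import numpy as np

def lgg_generalization_score(latents, labels, k=3):
    """Hypothetical sketch of an LGG-style score: connect each sample to its
    k nearest neighbours in latent space, then return the fraction of edges
    whose endpoints share a class (higher = fewer cross-class connections)."""
    latents = np.asarray(latents, dtype=float)
    labels = np.asarray(labels)
    # Pairwise Euclidean distances between all latent vectors.
    dists = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)  # exclude self-edges
    intra, total = 0, 0
    for i in range(len(latents)):
        neighbours = np.argsort(dists[i])[:k]  # k closest samples
        intra += int(np.sum(labels[neighbours] == labels[i]))
        total += k
    return intra / total

# Two well-separated synthetic clusters: no cross-class edges expected.
rng = np.random.default_rng(0)
z = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(5.0, 0.1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
print(lgg_generalization_score(z, y, k=3))  # → 1.0
```

In practice such a score would be computed per layer on the latent representations of a trained network; how the per-layer scores are combined and how similarity is defined are design choices not specified in this short summary.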