Geometric and Topological Inference for Deep Representations of Complex Networks
Abstract
Understanding the deep representations of complex networks is an important step toward building interpretable and trustworthy machine learning applications in the internet age. Global surrogate models that approximate the predictions of a black box model (e.g. an artificial or biological neural net) are commonly used to provide theoretical insight into model interpretability. In order to evaluate how well a surrogate model can account for the representation in another model, we need to develop inference methods for model comparison. Previous studies have compared models and brains in terms of their representational geometries (characterized by the matrix of distances between representations of the input patterns in a model layer or cortical area). In this study, we propose to explore these summary statistical descriptions of representations in models and brains as part of a broader class of statistics that emphasize the topology as well as the geometry of representations. The topological summary statistics build on topological data analysis (TDA) and other graph-based methods. We evaluate these statistics in terms of the sensitivity and specificity that they afford when used for model selection, with the goal of relating different neural network models to each other and making inferences about the computational mechanism that might best account for a black box representation. These new methods enable brain and computer scientists to visualize the dynamic representational transformations learned by brains and models, and to perform model-comparative statistical inference.
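As an illustration of the geometric comparison the abstract describes (a minimal sketch, not code from the paper), the example below computes a representational distance matrix for each of two model layers and compares them with a rank correlation of their pairwise distances, in the spirit of representational similarity analysis. The function and variable names (`representational_distance_matrix`, `compare_geometries`) are hypothetical and only use standard NumPy/SciPy calls.

```python
# Illustrative sketch (not from the paper): compare the representational
# geometries of two layers via their pairwise-distance structure.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def representational_distance_matrix(activations, metric="correlation"):
    """activations: (n_stimuli, n_units) array of layer responses.
    Returns the condensed vector of pairwise distances between stimuli."""
    return pdist(activations, metric=metric)

def compare_geometries(acts_a, acts_b):
    """Spearman correlation between two layers' distance structures:
    a simple geometric summary-statistic comparison."""
    rdm_a = representational_distance_matrix(acts_a)
    rdm_b = representational_distance_matrix(acts_b)
    rho, _ = spearmanr(rdm_a, rdm_b)
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stimuli = rng.normal(size=(50, 100))                     # 50 input patterns
    layer_a = np.tanh(stimuli @ rng.normal(size=(100, 64)))  # model A layer responses
    layer_b = np.tanh(stimuli @ rng.normal(size=(100, 64)))  # model B layer responses
    print(f"Geometric similarity (Spearman rho): {compare_geometries(layer_a, layer_b):.3f}")
```

A topological comparison in the same spirit would replace the distance vectors with TDA summaries (e.g. persistence diagrams computed from each distance matrix) before comparing models.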
- Publication:
- arXiv e-prints
- Pub Date:
- March 2022
- DOI:
- 10.48550/arXiv.2203.05488
- arXiv:
- arXiv:2203.05488
- Bibcode:
- 2022arXiv220305488L
- Keywords:
- Computer Science - Machine Learning;
- Mathematics - Geometric Topology;
- Quantitative Biology - Neurons and Cognition
- E-Print:
- To appear in Proceedings of WWW 2022. This work extends our prior work (arXiv:1810.02923, arXiv:1906.09264, arXiv:1902.10658) and puts it in perspective alongside other ongoing research in this direction