Information theoretic study of the neural geometry induced by category learning
Abstract
Categorization is an important topic for both biological and artificial neural networks. Here, we take an information-theoretic approach to assess the efficiency of the representations induced by category learning. We show that the relevant Bayesian cost decomposes into two components, one for the coding part and one for the decoding part. Minimizing the coding cost amounts to maximizing the mutual information between the set of categories and the neural activities. We show analytically that this mutual information can be written as the sum of two terms, which can be interpreted as (i) finding an appropriate representation space and (ii) building a representation with the appropriate metric on this space, based on the neural Fisher information. A main consequence is that category learning induces an expansion of neural space near decision boundaries. Finally, we provide numerical illustrations showing how the Fisher information of the coding neural population aligns with the boundaries between categories.
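As a rough numerical illustration of the last point (a sketch, not the authors' own experiments), the snippet below computes the Fisher information of a hypothetical population of independent Poisson neurons with Gaussian tuning curves encoding a one-dimensional stimulus, with two assumed categories meeting at the boundary x = 0. For such a population, F(x) = Σ_i f_i'(x)² / f_i(x). All parameters and the quadratic warping of tuning-curve centers are illustrative assumptions: concentrating centers near the category boundary raises the Fisher information there, i.e. locally expands neural space.

```python
import numpy as np

# Illustrative sketch (not the paper's code): Fisher information of a
# population of independent Poisson neurons with Gaussian tuning curves
# encoding a 1-D stimulus x. Two hypothetical categories meet at x = 0.

def fisher_info(x, centers, width=0.5, gain=10.0, baseline=0.1):
    """F(x) = sum_i f_i'(x)^2 / f_i(x) for independent Poisson neurons."""
    d = x[:, None] - centers[None, :]              # (n_x, n_neurons)
    bump = gain * np.exp(-0.5 * (d / width) ** 2)  # Gaussian tuning curves
    f = baseline + bump                            # mean firing rates
    fprime = -d / width**2 * bump                  # analytic derivative of f
    return (fprime**2 / f).sum(axis=1)

x = np.linspace(-3.0, 3.0, 601)
n = 50

# Uniform code: tuning-curve centers spread evenly over the stimulus range.
uniform_centers = np.linspace(-3.0, 3.0, n)

# "Category-trained" code: same neurons, but centers compressed toward the
# boundary x = 0 (a monotonic quadratic warping, chosen for illustration).
warped_centers = np.sign(uniform_centers) * uniform_centers**2 / 3.0

F_uniform = fisher_info(x, uniform_centers)
F_warped = fisher_info(x, warped_centers)

near_boundary = np.abs(x) < 0.25
print("mean F near boundary, uniform code:", F_uniform[near_boundary].mean())
print("mean F near boundary, warped code: ", F_warped[near_boundary].mean())
# The warped code yields higher Fisher information near the category
# boundary: local discriminability is enhanced, i.e. neural space expands.
```

Running the sketch prints a markedly larger mean Fisher information near x = 0 for the warped code, mirroring the qualitative effect described in the abstract.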
- Publication: arXiv e-prints
- Pub Date: November 2023
- DOI: 10.48550/arXiv.2311.15682
- arXiv: arXiv:2311.15682
- Bibcode: 2023arXiv231115682B
- Keywords: Computer Science - Machine Learning; Computer Science - Information Theory; Quantitative Biology - Neurons and Cognition
- E-Print: 7 pages, 2 figures, Accepted (Oral) to InfoCog@NeurIPS 2023