Local plasticity rules can learn deep representations using self-supervised contrastive predictions
Abstract
Learning in the brain is poorly understood, and learning rules that respect biological constraints yet yield deep hierarchical representations are still unknown. Here, we propose a learning rule that takes inspiration from neuroscience and recent advances in self-supervised deep learning. Learning minimizes a simple layer-specific loss function and does not need to back-propagate error signals within or between layers. Instead, weight updates follow a local, Hebbian learning rule that depends only on pre- and post-synaptic neuronal activity, predictive dendritic input, and widely broadcast modulation factors that are identical for large groups of neurons. The learning rule applies contrastive predictive learning to a causal, biological setting using saccades (i.e., rapid shifts in gaze direction). We find that networks trained with this self-supervised, local rule build deep hierarchical representations of images, speech, and video.
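The flavor of the rule described above can be sketched in a few lines of NumPy. This is a speculative illustration, not the paper's implementation: all names, dimensions, and the hinge-style gating are assumptions. It shows the key structural properties the abstract claims: the update for each weight uses only pre-synaptic activity, post-synaptic (dendritic) predictive drive, and a single broadcast modulation scalar shared by all neurons in the layer; no error signal is back-propagated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and learning rate (illustrative, not from the paper)
n_in, n_out = 20, 10
W = rng.normal(scale=0.1, size=(n_out, n_in))        # feedforward weights
W_pred = rng.normal(scale=0.1, size=(n_out, n_out))  # predictive ("dendritic") weights
eta = 0.01

def layer(x, W):
    """Simple rectified linear layer standing in for one cortical stage."""
    return np.maximum(0.0, W @ x)

# Two consecutive "fixations": the layer tries to predict its own future activity.
x_t = rng.normal(size=n_in)    # pre-synaptic input at time t
x_pos = rng.normal(size=n_in)  # input at time t+1 (positive, temporally adjacent)
x_neg = rng.normal(size=n_in)  # unrelated input, e.g. after a saccade (negative)

z_t = layer(x_t, W)
pred = W_pred @ z_t            # dendritic prediction of the next activity

for x_next, sign in ((x_pos, +1.0), (x_neg, -1.0)):
    z_next = layer(x_next, W)
    score = pred @ z_next      # agreement between prediction and actual outcome
    # Broadcast modulation: ONE scalar per sample and layer, identical for all
    # neurons. Hinge-style gating (an assumption): update only while the
    # contrast is not yet satisfied.
    m = sign if sign * score < 1.0 else 0.0
    # Local Hebbian update: modulation * post-synaptic drive * pre-synaptic input
    W += eta * m * np.outer(pred * (z_next > 0), x_next)
    W_pred += eta * m * np.outer(z_next, z_t)
```

Because `m` is a layer-wide scalar and every other factor in the outer product is locally available at the synapse, each layer can learn in isolation, which is the contrast with end-to-end backpropagation that the abstract draws.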
- Publication: arXiv e-prints
- Pub Date: October 2020
- DOI: 10.48550/arXiv.2010.08262
- arXiv: arXiv:2010.08262
- Bibcode: 2020arXiv201008262I
- Keywords: Computer Science - Neural and Evolutionary Computing; Computer Science - Artificial Intelligence; Computer Science - Hardware Architecture; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Machine Learning