Sign and Relevance Learning
Abstract
Standard models of biologically realistic or biologically inspired reinforcement learning employ a global error signal, which implies the use of shallow networks. Error backpropagation, by contrast, allows the use of networks with multiple layers. However, backpropagation is difficult to justify in biologically realistic networks because it requires precisely weighted error signals to be transmitted backwards from layer to layer. In this study, we introduce a novel network that solves this problem by propagating only the sign of the plasticity change (i.e., LTP or LTD) through the whole network, while neuromodulation controls the learning rate. The neuromodulatory signal can be understood as a rectified error, or relevance, signal, while the top-down sign of the error determines whether long-term potentiation or long-term depression will occur. As a proof of concept, we tested this approach on a real-world robotic task. Our results show that this paradigm can learn complex tasks using a biologically plausible learning mechanism.
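The core idea of the abstract can be sketched in a few lines: each synapse moves in the direction given only by the sign of the error (LTP vs. LTD), while a rectified error acts as a neuromodulatory "relevance" signal that gates the learning rate. The sketch below is a minimal single-layer illustration under these assumptions; all names (`sign_relevance_update`, the toy targets) are illustrative and not taken from the paper's actual implementation.

```python
import numpy as np

def sign_relevance_update(w, pre, error, base_lr=0.01):
    """One plasticity step: direction from sign(error), magnitude gated by relevance.

    relevance = |error| plays the role of the neuromodulatory signal
    (a rectified error); sign(error) decides LTP (+1) vs. LTD (-1).
    """
    relevance = np.abs(error)        # rectified error = neuromodulatory gate
    direction = np.sign(error)       # top-down sign: potentiate or depress
    # Each synapse is scaled by presynaptic activity, the signed direction,
    # and the relevance-controlled learning rate.
    dw = base_lr * (relevance * direction)[:, None] * pre[None, :]
    return w + dw

# Toy usage: a 3-input, 2-output linear layer driven toward a fixed target.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(2, 3))
x = np.array([0.5, -1.0, 1.5])       # fixed presynaptic activity pattern
target = np.array([1.0, -1.0])
for _ in range(500):
    error = target - w @ x
    w = sign_relevance_update(w, x, error)
print(np.round(w @ x, 2))
```

Note that for a single layer this rule reduces to a gated delta rule; the paper's contribution is that between layers only the sign needs to travel, which this one-layer toy does not exercise.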
- Publication: arXiv e-prints
- Pub Date: October 2021
- DOI: 10.48550/arXiv.2110.07292
- arXiv: arXiv:2110.07292
- Bibcode: 2021arXiv211007292D
- Keywords: Computer Science - Machine Learning; Quantitative Biology - Neurons and Cognition
- E-Print: 14 pages, 15 figures