A solution to the learning dilemma for recurrent networks of spiking
neurons
G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj, R. Legenstein,
and W. Maass
Abstract:
Recurrently connected networks of spiking neurons underlie the astounding
information processing capabilities of the brain. But in spite of extensive
research, it has remained an open question how they can learn through synaptic plasticity
to carry out complex network computations. We argue that two pieces of this
puzzle were provided by experimental data from neuroscience. A new
mathematical insight tells us how these pieces need to be combined to enable
biologically plausible online network learning through gradient descent, in
particular deep reinforcement learning. This new learning method, called
e-prop, approaches the performance of backpropagation through time (BPTT),
the best-known method for training recurrent neural networks in machine
learning. But in contrast to BPTT, e-prop is
biologically plausible. In addition, it suggests a method for powerful
on-chip learning in novel energy-efficient spike-based hardware for AI.
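The core idea behind e-prop, as described in the paper, is that the gradient of the loss with respect to a recurrent weight can be factored into a learning signal L_j^t (an online error signal reaching neuron j) and an eligibility trace e_ji^t that is computed forward in time at the synapse, so no backpropagation through time is needed. The following is a minimal schematic sketch of that factorization on a toy leaky (non-spiking) unit; the network size, the tanh surrogate, the random feedback matrix B, and the toy readout error are illustrative assumptions, not the paper's actual spiking-network setup.

```python
import numpy as np

# Schematic e-prop sketch on a toy leaky unit (illustrative assumptions,
# not the paper's spiking LIF network).
rng = np.random.default_rng(0)
n_in, n_rec = 3, 4
alpha = 0.9                                 # membrane / trace decay factor
W_in = rng.normal(scale=0.5, size=(n_rec, n_in))
B = rng.normal(size=n_rec)                  # fixed random feedback weights

T = 20
x = rng.normal(size=(T, n_in))              # toy input sequence
target = rng.normal(size=T)                 # toy scalar target per step

v = np.zeros(n_rec)                         # membrane potentials
eps = np.zeros((n_rec, n_in))               # filtered presynaptic traces
dW = np.zeros_like(W_in)                    # accumulated gradient estimate

for t in range(T):
    v = alpha * v + W_in @ x[t]             # leaky membrane update
    z = np.tanh(v)                          # unit output (surrogate for spikes)
    eps = alpha * eps + x[t][None, :]       # trace filtered like the membrane
    h = 1.0 - z**2                          # surrogate derivative dz/dv
    e = h[:, None] * eps                    # eligibility trace e_ji^t
    err = z.sum() - target[t]               # toy readout error
    L = B * err                             # per-neuron learning signal L_j^t
    dW += L[:, None] * e                    # online sum_t L_j^t e_ji^t

eta = 0.01
W_in -= eta * dW                            # weight update, no backprop in time
```

Everything in the loop uses only quantities available at time t, which is what makes the rule online and, in the paper's argument, biologically plausible.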
Reference: G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj,
R. Legenstein, and W. Maass.
A solution to the learning dilemma for recurrent networks of spiking neurons.
Nature Communications, 11:3625, 2020.