A solution to the learning dilemma for recurrent networks of spiking neurons
Recurrently connected networks of spiking neurons underlie the astounding
information processing capabilities of the brain. But in spite of extensive
research, it has remained an open question how learning through synaptic plasticity could
be organized in such networks. We argue that two pieces of this puzzle were
provided by experimental data from neuroscience. A new mathematical insight
tells us how they need to be combined to enable network learning through
gradient descent. The resulting learning method, called e-prop, approaches the
performance of BPTT (backpropagation through time), the best known
method for training recurrent neural networks in machine learning. But in
contrast to BPTT, e-prop is biologically plausible. In addition, it
elucidates how new brain-inspired computer chips that are drastically more
energy-efficient can be enabled to learn.
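The key contrast with BPTT is that e-prop needs no backward pass through time: each synapse maintains a local eligibility trace online, and the weight update is the product of that trace with a top-down learning signal. The sketch below illustrates this factorization in NumPy under simplified assumptions — a leaky integrator stands in for the spiking neuron model, and all sizes, names, and signals are illustrative, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of the e-prop idea (not the paper's code):
# weight update = per-neuron learning signal  x  per-synapse eligibility trace,
# both computed online, so no backpropagation through time is needed.

rng = np.random.default_rng(0)
n_in, n_rec = 3, 4   # network sizes (assumed)
T = 20               # number of time steps (assumed)
alpha = 0.9          # membrane leak factor (assumed)
eta = 0.01           # learning rate (assumed)

W_in = rng.normal(0.0, 0.5, (n_rec, n_in))
v = np.zeros(n_rec)                  # membrane potentials
trace = np.zeros((n_rec, n_in))      # eligibility traces, one per synapse
dW = np.zeros_like(W_in)

x = rng.random((T, n_in))            # input activity (illustrative)
target = rng.random((T, n_rec))      # target output (illustrative)

for t in range(T):
    # leaky integration (spiking dynamics simplified away for clarity)
    v = alpha * v + W_in @ x[t]
    # eligibility trace: low-pass filtered presynaptic activity per synapse
    trace = alpha * trace + x[t][None, :]
    # learning signal: per-neuron error, available online at each step
    L = v - target[t]
    # e-prop-style update: learning signal times eligibility trace
    dW -= eta * L[:, None] * trace

W_in += dW
print(dW.shape)  # shape matches the input weight matrix
```

Because both factors are computed forward in time with purely local quantities, the update can run online on neuromorphic hardware, whereas BPTT would require storing and replaying the full network history.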
Reference: G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj,
R. Legenstein, and W. Maass.
A solution to the learning dilemma for recurrent networks of spiking neurons.
bioRxiv, doi: 10.1101/738385, August 2019.