One-shot learning with spiking neural networks
F. Scherr, C. Stoeckl, and W. Maass
Abstract:
Understanding how one-shot learning can be accomplished through synaptic
plasticity in neural networks of the brain is a major open problem. We
propose that approximations to backpropagation through time (BPTT) in recurrent
networks of spiking neurons (RSNNs), such as e-prop, cannot achieve this because
their local synaptic plasticity is gated by learning signals that are rather ad
hoc from a biological perspective: random projections of instantaneously arising
losses at the network outputs, analogous to Broadcast Alignment for feedforward
networks. In contrast, synaptic plasticity is gated in the brain by learning
signals such as dopamine, which are emitted by specialized brain areas, e.g.
the VTA. These brain areas have arguably been optimized by evolution to gate
synaptic plasticity in such a way that fast learning of survival-relevant
tasks is enabled. We found that a corresponding model architecture, where
learning signals are emitted by a separate RSNN that is optimized to
facilitate fast learning, enables one-shot learning via local synaptic
plasticity in RSNNs for large families of learning tasks. The same learning
approach also supports fast spike-based learning of posterior probabilities
of potential input sources, thereby providing a new basis for probabilistic
reasoning in RSNNs. Our new learning approach also solves an open problem in
neuromorphic engineering, where on-chip one-shot learning capability is highly
desirable for spike-based neuromorphic devices but has so far not been achieved.
Our method can easily be mapped onto neuromorphic hardware, thereby solving this
problem.
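
To make the gating mechanism concrete, the following minimal NumPy sketch
illustrates an e-prop-style update in which local eligibility traces of input
synapses are multiplied by an external learning signal. This is not the authors'
implementation: the network sizes, the pseudo-derivative, and the random
placeholder learning signal are illustrative assumptions; in the architecture
proposed in the paper, the learning signal would instead be emitted by a
separate RSNN that has been optimized to enable fast learning.

import numpy as np

# Sketch of a learning-signal-gated local plasticity update (e-prop style).
# All names and dimensions are illustrative assumptions, not the paper's code.

rng = np.random.default_rng(0)
n_in, n_rec = 20, 50          # input and recurrent population sizes (assumed)
T = 100                       # number of time steps
eta = 1e-3                    # learning rate
alpha = 0.9                   # membrane / trace decay factor of LIF neurons

W_in = rng.normal(0.0, 0.1, (n_rec, n_in))  # input weights to be adapted

v = np.zeros(n_rec)           # membrane potentials
x_trace = np.zeros(n_in)      # low-pass filtered presynaptic spike trains
dW = np.zeros_like(W_in)      # accumulated weight change

for t in range(T):
    x = (rng.random(n_in) < 0.05).astype(float)   # random input spikes
    v = alpha * v + W_in @ x                      # leaky integration
    z = (v > 1.0).astype(float)                   # spikes of recurrent neurons
    v -= z                                        # soft reset after spiking

    # Local eligibility trace e_ji(t): filtered presynaptic activity scaled by
    # a piecewise-linear pseudo-derivative of the membrane potential.
    x_trace = alpha * x_trace + x
    psi = np.maximum(0.0, 1.0 - np.abs(v - 1.0))
    elig = psi[:, None] * x_trace[None, :]

    # Learning signal L_j(t): in the proposed architecture it is emitted by a
    # separate RSNN optimized for fast learning; here it is a random placeholder.
    L = rng.normal(0.0, 0.1, n_rec)

    # The learning signal gates the purely local eligibility trace.
    dW += eta * L[:, None] * elig

W_in += dW   # apply the accumulated weight change after the episode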
Reference: F. Scherr, C. Stoeckl, and W. Maass.
One-shot learning with spiking neural networks.
bioRxiv, 2020.