Long short-term memory and Learning-to-learn in networks of spiking neurons

G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass


The brain carries out demanding computations and learning processes with recurrent networks of spiking neurons (RSNNs). But the computing and learning capabilities of currently available RSNN models have remained poor, especially in comparison with recurrent networks of artificial neurons, such as Long Short-Term Memory (LSTM) networks. In this article, we investigate whether deep learning can improve RSNN performance. We applied backpropagation through time (BPTT), augmented by biologically inspired heuristics for synaptic rewiring, to RSNNs whose inherent time constants were enriched through simple models for adapting spiking neurons; we refer to the resulting networks as LSNNs. We found that these LSNNs approximate, for the first time, the computational power of LSTM networks on two common benchmark tasks. Furthermore, our results show that recent successes with applications of Learning-to-Learn (L2L) to LSTM networks can be ported to RSNNs. This opens the door to the investigation of L2L in data-based models for neural networks of the brain, whose activity can, unlike that of LSTM networks, be compared directly with recordings from neurons in the brain. In particular, L2L shows that RSNNs can learn large families of non-linear transformations from very few examples, using previously unknown network learning mechanisms. Furthermore, meta-reinforcement learning (meta-RL) shows that LSNNs can learn and execute complex exploration and exploitation strategies.
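To illustrate the "adapting spiking neurons" mentioned above, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron with an adaptive firing threshold: each spike raises the threshold, which then decays back on a slow time constant, enriching the neuron's effective memory. All constants and the exact update rule here are illustrative assumptions for exposition, not the paper's precise parameterization.

```python
import numpy as np

def simulate_adaptive_lif(input_current, dt=1.0, tau_m=20.0, tau_a=200.0,
                          v_th0=1.0, beta=1.5):
    """Simulate one adaptive-threshold LIF neuron; return its binary spike train.

    Illustrative parameters (assumptions): tau_m is the membrane time constant,
    tau_a the slow adaptation time constant, v_th0 the baseline threshold, and
    beta the strength with which adaptation raises the threshold.
    """
    alpha = np.exp(-dt / tau_m)   # membrane decay per time step
    rho = np.exp(-dt / tau_a)     # adaptation decay (slow time constant)
    v, a = 0.0, 0.0               # membrane potential, adaptation variable
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v = alpha * v + i_t                    # leaky integration of the input
        threshold = v_th0 + beta * a           # effective (adaptive) threshold
        if v >= threshold:                     # spike, then subtractive reset
            spikes[t] = 1.0
            v -= threshold
        a = rho * a + (1.0 - rho) * spikes[t]  # each spike raises future thresholds
    return spikes

# Under constant drive, adaptation makes the firing rate decay over time:
# the neuron retains a trace of its recent activity on the time scale tau_a.
spk = simulate_adaptive_lif(np.full(1000, 0.15))
```

Because the adaptation variable evolves on the slow time constant `tau_a`, a network containing such neurons carries information over hundreds of time steps, which is the mechanism the abstract credits with bringing RSNN performance closer to that of LSTM networks.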

Reference: G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass. Long short-term memory and learning-to-learn in networks of spiking neurons. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montreal, Canada, 2018.