A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware

A. Rao, P. Plank, A. Wild, and W. Maass

Abstract:

Spike-based neuromorphic hardware holds promise for more energy-efficient implementations of deep neural networks (DNNs) than standard hardware such as GPUs. But this requires an understanding of how DNNs can be emulated in an event-based, sparse firing regime, as otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ long short-term memory (LSTM) units that are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing (AHP) currents after each spike, provides an efficient solution. AHP currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel’s Loihi chip. Filter approximation theory explains why AHP neurons can emulate the function of LSTM units. This yields a highly energy-efficient approach to time-series classification. Furthermore, it provides the basis for an energy-efficient implementation of an important class of large DNNs that extract relations between words and sentences in a text in order to answer questions about it.
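To make the mechanism concrete, the following is a minimal sketch of a leaky integrate-and-fire neuron augmented with a slow AHP current, written in plain Python/NumPy. The function name simulate_ahp_neuron, the discrete-time update rule, and all parameter values are illustrative assumptions chosen for exposition; they are not the exact neuron model of the paper, nor the compartment dynamics of the Loihi chip.

import numpy as np

def simulate_ahp_neuron(input_current, dt=1.0,
                        tau_mem=20.0,    # membrane time constant (ms); illustrative
                        tau_ahp=500.0,   # slow AHP time constant (ms); illustrative
                        v_thresh=1.0,    # spike threshold
                        ahp_jump=0.2):   # AHP increment per spike
    """Simulate membrane voltage v and AHP current a in discrete time.

    After each spike, the hyperpolarizing current a is incremented and
    then decays slowly (tau_ahp >> tau_mem), so recent spiking suppresses
    the neuron's excitability for hundreds of time steps: a cell-intrinsic
    memory of the neuron's own activity.
    """
    alpha = np.exp(-dt / tau_mem)   # membrane decay factor per step
    rho = np.exp(-dt / tau_ahp)     # AHP decay factor per step
    v, a = 0.0, 0.0
    spikes, v_trace, a_trace = [], [], []
    for I in input_current:
        v = alpha * v + (1 - alpha) * (I - a)  # AHP current opposes input drive
        s = 1 if v >= v_thresh else 0
        if s:
            v = 0.0            # reset membrane after a spike
            a += ahp_jump      # strengthen the hyperpolarizing current
        a *= rho               # slow decay of the AHP current
        spikes.append(s)
        v_trace.append(v)
        a_trace.append(a)
    return np.array(spikes), np.array(v_trace), np.array(a_trace)

# Constant drive: the AHP current builds up and the firing rate adapts,
# so spike counts over a window reflect how long the neuron has been active.
spikes, v, a = simulate_ahp_neuron(np.full(2000, 1.2))
print("spikes in first 500 steps:", spikes[:500].sum())
print("spikes in last 500 steps:", spikes[-500:].sum())

Because tau_ahp is much larger than the membrane time constant, the AHP variable a retains a trace of recent spiking over hundreds of time steps. This slowly decaying, cell-intrinsic state plays a role loosely analogous to the memory cell of an LSTM unit, which is what allows a spiking network built from such neurons to hold information over long time spans while firing only sparsely.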



Reference: A. Rao, P. Plank, A. Wild, and W. Maass. A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware. Nature Machine Intelligence, 4:467–479, 2022.