Memory-efficient Deep Learning on a SpiNNaker 2 prototype
C. Liu, G. Bellec, B. Vogginger, D. Kappel, J. Partzsch,
F. Neumärker, S. Höppner, W. Maass, S. B. Furber, R. Legenstein, and
C. G. Mayr
Abstract:
The memory requirements of deep learning algorithms are considered incompatible
with the memory restrictions of energy-efficient hardware. A low memory
footprint can be achieved by pruning obsolete connections or reducing the
precision of connection strengths after the network has been trained. Yet,
these techniques are not applicable when neural networks have to be trained
directly on hardware under such hard memory constraints. Deep rewiring
(DEEP R) is a training algorithm that continuously rewires the network while
preserving very sparse connectivity throughout the training procedure. We
apply DEEP R to a deep neural network implementation on a
prototype chip of the 2nd generation SpiNNaker system. The local memory of a
single core on this chip is limited to 64 KB, and a deep network architecture
is trained entirely within this constraint without the use of external
memory. Throughout training, the proportion of active connections is limited
to 1.3%. On the handwritten digits dataset MNIST, this extremely sparse
network achieves 96.6% classification accuracy at convergence. Utilizing the
multi-processor feature of the SpiNNaker system, we found very good scaling
in terms of computation time, per-core memory consumption, and energy.
When compared to an X86 CPU implementation, neural network training on the
SpiNNaker 2 prototype reduces power and energy consumption by two orders of
magnitude.
Reference: C. Liu, G. Bellec, B. Vogginger, D. Kappel, J. Partzsch,
F. Neumärker, S. Höppner, W. Maass, S. B. Furber, R. Legenstein, and
C. G. Mayr.
Memory-efficient deep learning on a SpiNNaker 2 prototype.
Frontiers in Neuroscience, 2018.
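
Note: the abstract above describes the DEEP R rewiring mechanism (connections
whose parameter crosses zero are pruned, and an equal number of dormant
connections is regrown at random locations so the total connectivity stays
fixed). The following NumPy sketch only illustrates that idea; all names,
hyperparameters, and layer sizes (theta, eta, alpha, temp, 784x100) are
illustrative assumptions and do not reproduce the paper's on-chip
implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 784, 100      # layer size (illustrative)
    sparsity = 0.013            # ~1.3% active connections, as in the abstract
    n_active = int(sparsity * n_in * n_out)

    # Every potential connection has a fixed sign and a magnitude parameter
    # theta; a connection is active (contributes to the weights) iff theta >= 0.
    signs = rng.choice([-1.0, 1.0], size=(n_in, n_out))
    theta = np.full((n_in, n_out), -1.0)            # start with all dormant
    initial = rng.choice(n_in * n_out, size=n_active, replace=False)
    theta.flat[initial] = rng.uniform(0.0, 0.1, size=n_active)

    def weights(theta, signs):
        # Effective weight matrix: only non-negative magnitudes contribute.
        return signs * np.maximum(theta, 0.0)

    def rewiring_step(theta, signs, grad_w, eta=0.05, alpha=1e-4, temp=1e-5):
        # One rewiring update: gradient step, L1 term, and noise applied to
        # active connections, followed by random regrowth so the number of
        # active connections stays constant.
        is_active = theta >= 0.0
        n_target = int(is_active.sum())
        noise = np.sqrt(2.0 * eta * temp) * rng.standard_normal(theta.shape)
        theta = theta - is_active * (eta * (signs * grad_w + alpha) - noise)
        # Connections whose magnitude crossed below zero are pruned ...
        n_pruned = n_target - int((theta >= 0.0).sum())
        # ... and the same number of dormant connections is reactivated at
        # random locations with zero magnitude.
        dormant = np.flatnonzero(theta < 0.0)
        regrow = rng.choice(dormant, size=n_pruned, replace=False)
        theta.flat[regrow] = 0.0
        return theta

In a full training loop, grad_w would be the backpropagated gradient with
respect to the effective weight matrix weights(theta, signs); the fixed-point
arithmetic and multi-core mapping used on the SpiNNaker 2 prototype are not
reproduced here.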