S. Schmitt, J. Klähn, G. Bellec, A. Grübl, M. Güttler, A. Hartel,
S. Hartmann, D. Husmann, K. Husmann, S. Jeltsch, V. Karasenko, M. Kleider,
C. Koke, A. Kononov, C. Mauch, E. Müller, P. Müller, J. Partzsch,
M. A. Petrovici, S. Schiefer, S. Scholze, V. Thanasoulis, B. Vogginger,
R. Legenstein, W. Maass, C. Mayr, R. Schüffny, J. Schemmel, and K. Meier

Emulating spiking neural networks on analog neuromorphic hardware offers
several advantages over simulating them on conventional computers,
particularly in terms of speed and energy consumption. However, this usually
comes at the cost of reduced control over the dynamics of the emulated
networks. In this paper, we demonstrate how iterative training of a
hardware-emulated network can compensate for anomalies induced by the analog
substrate. We first convert a deep neural network trained in software to a
spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby
enabling an acceleration factor of 10 000 compared to the biological time
domain. This mapping is followed by in-the-loop training, where in each
training step, the network activity is first recorded in hardware and then
used to compute the parameter updates in software via backpropagation. An
essential finding is that the parameter updates do not have to be precise,
but only need to approximately follow the correct gradient, which simplifies
the computation of updates. Using this approach, after only several tens of
iterations, the spiking network shows an accuracy close to the ideal
software-simulated prototype. The presented techniques show that deep spiking
networks emulated on analog neuromorphic devices can attain good
computational performance despite the inherent variations of the analog
substrate.
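The in-the-loop scheme described above can be illustrated with a minimal, self-contained sketch. This is not the BrainScaleS system or a spiking network: the "hardware" is stood in for by a linear readout whose effective weights carry fixed, unknown multiplicative distortions, and all names (`emulate_on_hardware`, `distortion`, the toy task) are hypothetical. The loop records the distorted output, then computes the update in software as if the substrate were ideal, so the gradient is only approximate yet still roughly follows the correct descent direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for substrate anomalies: the weights actually
# realised on "hardware" differ from the programmed ones by fixed,
# unknown multiplicative variations.
distortion = 1.0 + 0.2 * rng.standard_normal((4, 2))

def emulate_on_hardware(W, X):
    # "Record the network activity": a linear readout through the
    # distorted effective weights W * distortion.
    return X @ (W * distortion)

# Toy task reproduced exactly by an ideal software-trained prototype.
X = rng.standard_normal((64, 4))
W_ideal = rng.standard_normal((4, 2))
Y = X @ W_ideal

# Mapping the software weights to "hardware" degrades performance.
W = W_ideal.copy()
loss_before = float(np.mean((emulate_on_hardware(W, X) - Y) ** 2))

# In-the-loop training: activity is recorded on "hardware", but the
# parameter update is computed in software, ignoring the distortions.
lr = 0.1
for _ in range(50):
    err = emulate_on_hardware(W, X) - Y   # recorded activity vs. target
    grad = X.T @ err / len(X)             # backprop assuming an ideal substrate
    W -= lr * grad                        # approximate gradient step

loss_after = float(np.mean((emulate_on_hardware(W, X) - Y) ** 2))
```

Because the distortions here only rescale each gradient component by a positive factor, the approximate update remains a descent direction, which is the intuition behind the finding that updates need only approximately follow the correct gradient.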