Seminar Computational Intelligence B (708.112)

SS 2020

Institut für Grundlagen der Informationsverarbeitung (708)


Assoc. Prof. Dr. Robert Legenstein

Office hours: by appointment (via e-mail)


Location: IGI-seminar room, Inffeldgasse 16b/I, 8010 Graz
Date: starting on Tuesday, March 10, 2020, 13:15 - 15:00

Content of the seminar: Training Spiking Neural Networks

"In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained in a supervised manner using backpropagation. Vast amounts of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and arguably the only viable option if one wants to understand how the brain computes. SNNs are also more hardware friendly and energy-efficient than ANNs, and are thus appealing for technology, especially for portable devices. However, training deep SNNs remains a challenge. Spiking neurons’ transfer function is usually non-differentiable, which prevents using backpropagation." [Tavanaei et al. Deep Learning in Spiking Neural Networks. arXiv 2018.].

Spiking neural networks are an important alternative to artificial neural networks, in particular in view of future low-energy hardware implementations of neural networks. In this seminar, we will discuss training methods for spiking neural networks, with an emphasis on supervised training and with a brief outlook on spiking neuromorphic hardware.
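To make the non-differentiability problem quoted above concrete, here is a minimal PyTorch sketch of the surrogate-gradient idea that several of the papers listed below build on (e.g., SuperSpike, Zenke & Ganguli 2018). It is purely illustrative and not part of the seminar materials; all constants and variable names are assumptions for the example. The forward pass keeps the hard, non-differentiable spike threshold, while the backward pass substitutes a smooth surrogate derivative.

# Illustrative sketch (assumed names/constants): a spike nonlinearity with a
# SuperSpike-style surrogate derivative, usable with PyTorch autograd.
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass; smooth surrogate derivative in the backward pass."""

    SCALE = 10.0  # steepness of the surrogate; an arbitrary illustrative choice

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # spike iff membrane potential exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of a "fast sigmoid", 1 / (SCALE*|v| + 1)^2, as in SuperSpike.
        return grad_output / (SurrogateSpike.SCALE * v.abs() + 1.0) ** 2

spike_fn = SurrogateSpike.apply

# One Euler step of a leaky integrate-and-fire membrane (hypothetical constants).
tau, threshold, dt = 10e-3, 1.0, 1e-3
w = torch.randn(5, requires_grad=True)   # input weights to be trained
x = torch.ones(5)                        # constant input current for the example
v = (dt / tau) * (w * x)                 # membrane potential after one step (from rest)
s = spike_fn(v - threshold)              # binary spikes, yet differentiable w.r.t. w
loss = ((s - 1.0) ** 2).sum()            # toy objective: every neuron should spike
loss.backward()                          # gradients flow through the surrogate
print(w.grad)                            # nonzero despite the hard threshold

Because only the backward pass is approximated, the network still emits binary spikes in the forward pass; gradient-based training becomes possible at the cost of a biased gradient.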

How to prepare and hold your talk:

The guide presented in the seminar: How to prepare and hold your talk


Date  | # | Topic / paper title                                                                                              | Presenter  | Presentation
28.4. | 1 | Intro: spiking neuron models. Chapters 1 and 6 from Gerstner et al. 2014 (online)                               | Lopes Dias |
28.4. | 2 | Spike timing-dependent plasticity: a Hebbian learning rule (PDF)                                                | Reichel    |
12.5. | 3 | Gradient descent for spiking neural networks (PDF)                                                              | Pernull    |
26.5. | 4 | SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks (PDF)                                     | Maiti      |
26.5. | 5 | Conversion of continuous-valued deep networks to efficient event-driven networks for image classification (PDF) | Kumar      |
9.6.  | 6 | A solution to the learning dilemma for recurrent networks of spiking neurons (PDF)                              | Renner     |



Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press. Chapters 1 and 6, on the integrate-and-fire and the spike response neuron models. online


Maass, W. (1997). Networks of spiking neurons: the third generation of neural network models. Neural networks, 10(9), 1659-1671. PDF


Caporale, N., & Dan, Y. (2008). Spike timing–dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci., 31, 25-46. PDF



Single-layer unsupervised

Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature neuroscience, 3(9), 919. PDF


Masquelier, T., & Thorpe, S. J. (2007). Unsupervised learning of visual features through spike timing dependent plasticity. PLoS computational biology, 3(2), e31. PDF



Single-layer supervised

Pfister, J. P., Toyoizumi, T., Barber, D., & Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural computation, 18(6), 1318-1348. PDF


Ponulak, F., & Kasiński, A. (2010). Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural computation, 22(2), 467-510. PDF


Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing–based decisions. Nature neuroscience, 9(3), 420. PDF


Florian, R. V. (2012). The chronotron: a neuron that learns to fire temporally precise spike patterns. PloS one, 7(8), e40233. PDF



Multilayer supervised

Bohte, S. M., Kok, J. N., & La Poutre, H. (2002). Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4), 17-37. PDF


Booij, O., & tat Nguyen, H. (2005). A gradient descent rule for spiking neurons emitting multiple spikes. Information Processing Letters, 95(6), 552-558. PDF


Lee, J. H., Delbruck, T., & Pfeiffer, M. (2016). Training deep spiking neural networks using backpropagation. Frontiers in neuroscience, 10, 508. PDF

Zenke, F., & Ganguli, S. (2018). SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural computation, 30(6), 1514-1541. PDF


Huh, D., & Sejnowski, T. J. (2017). Gradient descent for spiking neural networks. In Advances in Neural Information Processing Systems (pp. 1433-1443). PDF


Mostafa, H. (2018). Supervised learning based on temporal coding in spiking neural networks. IEEE transactions on neural networks and learning systems, 29(7), 3227-3235. PDF


Wu, Y., Deng, L., Li, G., Zhu, J., & Shi, L. (2018). Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in neuroscience, 12. PDF


Tavanaei, A., & Maida, A. S. (2017). BP-STDP: Approximating backpropagation using spike timing dependent plasticity. arXiv preprint arXiv:1711.04214. PDF


Bellec, G., Salaj, D., Subramoney, A., Legenstein, R., & Maass, W. (2018). Long short-term memory and learning-to-learn in networks of spiking neurons. In Advances in Neural Information Processing Systems (pp. 787-797). PDF


Bellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., & Maass, W. (2019). A solution to the learning dilemma for recurrent networks of spiking neurons. bioRxiv preprint, doi:10.1101/738385. PDF




O'Connor, P., & Welling, M. (2016). Deep spiking networks. arXiv preprint arXiv:1602.08323. PDF




Conversion of ANNs to SNNs

Diehl, P. U., Neil, D., Binas, J., Cook, M., Liu, S. C., & Pfeiffer, M. (2015, July). Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Neural Networks (IJCNN), 2015 International Joint Conference on (pp. 1-8). IEEE. PDF


Rueckauer, B., Lungu, I. A., Hu, Y., Pfeiffer, M., & Liu, S. C. (2017). Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in neuroscience, 11, 682. PDF


Esser, S. K., et al. (2016). Convolutional networks for fast, energy-efficient neuromorphic computing. PNAS, 113(41), 11441-11446. PDF

see also:

Esser, S. K., Appuswamy, R., Merolla, P., Arthur, J. V., & Modha, D. S. (2015). Backpropagation for energy-efficient neuromorphic computing. In Advances in Neural Information Processing Systems (pp. 1117-1125). PDF

Talks should be no longer than 35 minutes, and should be clear, interesting, and informative rather than a reprint of the material. Select which parts of the material you want to present and which to leave out, and then present the selected material well (including definitions not given in the material: look them up on the web or, if that is not successful, ask the seminar organizers). Diagrams and figures are often useful in a talk. On the other hand, referring to references only by the numbers listed at the end is a no-no in a talk (a talk is an online process, not meant to be read). For the same reason, you may also briefly repeat earlier definitions if you suspect that the audience does not remember them.

Talks will be assigned at the first seminar meeting on March 10. Students are requested to take a quick look at the papers before this meeting in order to determine their preferences.

General rules:

Participation in the seminar meetings is obligatory. We also request your courtesy and attention for the seminar speaker: no smartphones, laptops, etc. during a talk. Furthermore, your active attention, questions, and discussion contributions are expected.

After your talk (and possibly some corrections), send a PDF of your slides to Charlotte Rumpf, who will post it on the seminar webpage.