A criterion for the convergence of learning with spike timing dependent plasticity

R. Legenstein and W. Maass

Abstract:

We investigate under what conditions a neuron can learn, through experimentally supported rules for spike timing dependent plasticity (STDP), to predict the arrival times of strong "teacher inputs" to the same neuron. It turns out that, in contrast to the famous Perceptron Convergence Theorem, which guarantees convergence of the perceptron learning rule for a strongly simplified neuron model whenever a stable solution exists, no equally strong convergence guarantee can be given for spiking neurons with STDP. However, we derive a criterion on the statistical dependency structure of input spike trains that characterizes exactly when learning with STDP will converge on average for a simple model of a spiking neuron. This criterion is reminiscent of the linear separability criterion of the Perceptron Convergence Theorem, but here it applies to the rows of a correlation matrix related to the spike inputs. In addition, we show through computer simulations for more realistic neuron models that the analytically predicted positive learning results hold not only for the common interpretation of STDP, in which STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data, in which STDP modulates the initial release probability of dynamic synapses.
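For orientation, the following is a minimal sketch of a standard pair-based STDP rule with exponential learning windows, the textbook form of the plasticity rules the abstract refers to; the amplitudes (A_PLUS, A_MINUS) and time constants (TAU_PLUS, TAU_MINUS) below are illustrative assumptions, not the specific model or parameters analyzed in the paper.

    import numpy as np

    # Pair-based STDP with exponential windows (illustrative parameters,
    # not those of the paper's neuron model).
    A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants (ms)

    def stdp_dw(delta_t):
        """Weight change for one pre/post spike pair.

        delta_t = t_post - t_pre in ms: pre-before-post (delta_t > 0)
        potentiates the synapse, post-before-pre depresses it.
        """
        if delta_t > 0:
            return A_PLUS * np.exp(-delta_t / TAU_PLUS)
        return -A_MINUS * np.exp(delta_t / TAU_MINUS)

    def total_weight_change(pre_spikes, post_spikes):
        # Sum the pairwise updates over all spike pairs of one synapse.
        return sum(stdp_dw(t_post - t_pre)
                   for t_pre in pre_spikes
                   for t_post in post_spikes)

    # Example: presynaptic spikes that mostly precede the postsynaptic
    # (teacher-driven) spikes yield a net positive weight change.
    pre  = [10.0, 50.0, 90.0]   # presynaptic spike times (ms)
    post = [12.0, 53.0, 95.0]   # postsynaptic spike times (ms)
    print(total_weight_change(pre, post))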



Reference: R. Legenstein and W. Maass. A criterion for the convergence of learning with spike timing dependent plasticity. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems, volume 18, pages 763–770. MIT Press, 2006.