On the complexity of learning for spiking neurons with temporal coding
Abstract:
Spiking neurons are models for the computational units in biological neural
systems where information is considered to be encoded mainly in the temporal
patterns of their activity. In a network of spiking neurons a new set of
parameters becomes relevant that has no counterpart in traditional neural
network models: the time that a pulse needs to travel through a connection
between two neurons (also known as the delay of a connection). It is known
that these delays are tuned in biological neural systems through a variety
of mechanisms. In this article we consider arguably the simplest model of a
spiking neuron, which can also easily be implemented in pulsed VLSI. We
investigate the Vapnik-Chervonenkis (VC) dimension of networks of
spiking neurons, where the delays are viewed as programmable parameters, and
we prove tight bounds for this VC dimension. Thus, we get quantitative
estimates for the diversity of functions that a network with fixed
architecture can compute with different settings of its delays. In
particular, it turns out that a network of spiking neurons with k adjustable
delays is able to compute a much richer class of functions than a threshold
circuit with k adjustable weights. The results also yield bounds for the
number of training examples that an algorithm needs for tuning the delays of
a network of spiking neurons. Results about the computational complexity of
such algorithms are also given.
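To make the model concrete, the following is a minimal Python sketch of a
spiking neuron of the simple kind discussed above, under the assumption of
rectangular response functions (a "type A" neuron). The function name
fires, the unit EPSP length, and the example below are illustrative
choices, not taken from the paper.

def fires(spike_times, delays, weights, theta, epsp_len=1.0):
    # Input spike i, emitted at time spike_times[i], arrives at the
    # soma at time spike_times[i] + delays[i] and contributes a
    # rectangular EPSP of height weights[i] for epsp_len time units.
    arrivals = [t + d for t, d in zip(spike_times, delays)]
    # The summed potential changes only at arrival times, so it
    # suffices to check it at each arrival.
    for t in sorted(arrivals):
        potential = sum(w for a, w in zip(arrivals, weights)
                        if a <= t < a + epsp_len)
        if potential >= theta:
            return True  # threshold reached: the neuron spikes
    return False

For instance, with weights (1, 1) and threshold 2, the input pattern
(0.0, 2.0) does not make the neuron fire when both delays are 0, but it
does fire when the delays are retuned to (2.0, 0.0): the delays alone
determine which temporal patterns the neuron accepts, and this is the
parameter class whose VC dimension the paper bounds.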
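The abstract's remark that the VC dimension yields bounds on the number of
training examples refers to the standard PAC sample-complexity bound (due
to Blumer et al.), stated here as a generic illustration rather than a
formula from the paper itself: a learning algorithm that outputs a
hypothesis consistent with the training data achieves error at most
\varepsilon with probability at least 1 - \delta once the number m of
random training examples satisfies

    m = O\!\left( \frac{1}{\varepsilon} \left( d \,\ln\frac{1}{\varepsilon}
        + \ln\frac{1}{\delta} \right) \right),

where d is the VC dimension of the function class, here the class of
functions computable by the network under all settings of its delays.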
Reference: W. Maass and M. Schmitt.
On the complexity of learning for spiking neurons with temporal coding.
Information and Computation, 153:26-46, 1999.