Replacing supervised classification learning by Slow Feature
Analysis in spiking neural networks
Abstract:
Many models for computations in recurrent networks of neurons assume that the
network state moves from some initial state to some fixed point attractor or
limit cycle that represents the output of the computation. However,
experimental data show that in response to a sensory stimulus the network
state moves from its initial state through a trajectory of network states and
eventually returns to the initial state, without reaching an attractor or
limit cycle in between. This type of network response, where salient
information about external stimuli is encoded in characteristic trajectories
of continuously varying network states, raises the question of how a neural
system could compute with such a code and arrive, for example, at a temporally
stable classification of the external stimulus. We show that a known
unsupervised learning algorithm, Slow Feature Analysis (SFA), could be an
important ingredient for extracting stable information from these network
trajectories. In fact, if sensory stimuli are more often followed by another
stimulus from the same class than by a stimulus from another class, SFA
approaches the classification capability of Fisher’s Linear Discriminant
(FLD), a powerful algorithm for supervised learning. We apply this principle
to simulated cortical microcircuits, and show that it enables readout neurons
to discriminate spoken digits and to detect repeating firing patterns within a
stream of spike trains with the same firing statistics, without any supervision
during learning.
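
To make the mechanism concrete, below is a minimal sketch of linear SFA in
Python/NumPy, together with a toy sequence exhibiting the temporal statistics
described above. This is an illustration of the standard SFA algorithm under
assumptions made here (the class means, the noise model, and the 0.9
class-persistence probability are all hypothetical choices), not the paper's
spiking-network implementation.

    import numpy as np

    def linear_sfa(x, n_components=1):
        """Linear Slow Feature Analysis: find projections of the signal x
        (shape: time x dims) whose outputs vary as slowly as possible over
        time, under unit-variance and decorrelation constraints."""
        x = x - x.mean(axis=0)                    # center the data
        # Whiten, so the constraints reduce to an orthogonality condition.
        eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
        whiten = eigvec / np.sqrt(eigval)         # scale directions to unit variance
        z = x @ whiten
        # Slowness objective: minimize the variance of the temporal derivative,
        # approximated by finite differences between consecutive samples.
        dval, dvec = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
        # Eigenvectors with the smallest eigenvalues give the slowest features.
        return whiten @ dvec[:, :n_components]    # projection matrix (dims x k)

    # Toy demonstration (hypothetical data, not from the paper): consecutive
    # samples usually stay in the same class, so class membership itself is a
    # slowly varying feature and SFA recovers an FLD-like projection.
    rng = np.random.default_rng(0)
    means = np.array([[0.0, 0.0], [3.0, 1.0]])    # two class means
    labels = [0]
    for _ in range(4999):                         # Markov chain over class labels
        labels.append(labels[-1] if rng.random() < 0.9 else 1 - labels[-1])
    x = means[np.array(labels)] + rng.normal(size=(5000, 2))
    w = linear_sfa(x)
    print(w.ravel() / np.linalg.norm(w))          # approx. +-(3, 1)/sqrt(10)

Since both classes in this sketch share the same spherical covariance,
Fisher's Linear Discriminant points along the difference of the class means,
and because the class rarely changes between consecutive samples, the slowest
linear feature aligns (up to sign) with that same direction; this is the sense
in which unsupervised SFA can approach the classification capability of
supervised FLD.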
Reference: S. Klampfl and W. Maass.
Replacing supervised classification learning by Slow Feature Analysis in
spiking neural networks.
In Proc. of NIPS 2009: Advances in Neural Information Processing
Systems, volume 22, pages 988–996. MIT Press, 2010.