A Spiking Neuron as Information Bottleneck
Abstract:
Neurons receive thousands of presynaptic input spike trains while emitting a
single output spike train. This drastic dimensionality reduction suggests
viewing a neuron as a bottleneck for information transmission. Extending
recent results, we propose a simple learning rule for the weights of spiking
neurons derived from the Information Bottleneck (IB) framework that minimizes
the loss of relevant information transmitted in the output spike train. In
the IB framework relevance of information is defined with respect to
contextual information, the latter entering the proposed learning rule as a
"third" factor besides pre- and postsynaptic activities. This renders the
theoretically motivated learning rule a plausible model for experimentally
observed synaptic plasticity phenomena involving three factors. Furthermore,
we show that the proposed IB learning rule allows spiking neurons to learn a
"predictive code",i.e. to extract those parts of their input that are
predictive for future input.
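For orientation, the classical IB objective of Tishby, Pereira, and Bialek,
from which such a rule is derived, can be sketched as follows; here X stands
for the presynaptic input, Y for the output spike train serving as the
bottleneck variable, and R for the contextual (relevance) signal. The paper's
spiking-neuron formulation differs in its details.

\[
  \min_{p(y \mid x)} \; I(X;Y) \;-\; \beta \, I(Y;R), \qquad \beta > 0,
\]

The trade-off parameter \beta weighs compression of the input X against
preservation of the information that is relevant with respect to R.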
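To make the three-factor structure of the update concrete, the following is a
minimal Python sketch of a generically gated Hebbian rule: a contextual signal
multiplies the product of a presynaptic eligibility trace and the postsynaptic
spike. All names (three_factor_update, eta, pre_trace) are illustrative
assumptions; the rule actually derived in the paper is the gradient of an IB
objective, not this generic form.

    import numpy as np

    def three_factor_update(w, pre_trace, post_spike, third_factor, eta=1e-3):
        # Generic three-factor update: Delta w_i = eta * M * x_i * y, where
        # M is the contextual "third" factor, x_i a presynaptic eligibility
        # trace, and y the postsynaptic spike (0 or 1).
        return w + eta * third_factor * pre_trace * post_spike

    # Toy usage: 5 synapses, random eligibility traces, one output spike,
    # and a scalar contextual signal gating the size (and sign) of the change.
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, 5)
    pre_trace = rng.random(5)  # low-pass-filtered presynaptic spikes
    w = three_factor_update(w, pre_trace, post_spike=1, third_factor=0.5)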
Reference: L. Buesing and W. Maass.
A spiking neuron as information bottleneck.
Neural Computation, 22:1961-1992, 2010.