Simplified Rules and Theoretical Analysis for Information Bottleneck
Optimization and PCA with Spiking Neurons
We show that under suitable assumptions (primarily linearization) a simple and
perspicuous online learning rule for Information Bottleneck optimization with
spiking neurons can be derived. On common benchmark tasks, this rule performs
as well as a considerably more complex rule that has previously been proposed.
Furthermore, the transparency of this new learning rule makes a theoretical
analysis of its convergence properties feasible. If this learning rule is
applied to an ensemble of neurons, it provides a theoretically founded method
for performing principal component analysis (PCA) with spiking neurons. In
addition, it makes it possible to preferentially extract those principal
components from incoming signals X that are related to some additional target
signal. In a biological interpretation, this target signal (also called the
relevance variable) could represent proprioceptive feedback, input from other
sensory modalities, or top-down signals.
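The spiking-neuron rule itself is not reproduced in this abstract. As a rough illustration of what an online, local PCA learning rule looks like in the linearized (rate-based) setting, the following sketch implements the classical Oja rule, which converges to the first principal component of the input stream; the data distribution and all parameters here are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D input stream whose leading principal
# component is the direction [1, 1] / sqrt(2).
n_steps = 20000
cov = np.array([[3.0, 2.0],
                [2.0, 3.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=n_steps)

# Oja's rule: w <- w + eta * y * (x - y * w), with linear output y = w . x.
# The -y^2 * w term keeps the weight vector normalized online.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.001
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

pc1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
alignment = abs(w @ pc1)  # close to 1 once w has converged to the first PC
```

Extracting components that are *related to a target signal*, as in the Information Bottleneck setting above, would additionally require modulating such a Hebbian update by the relevance variable, which is where the paper's derivation goes beyond plain PCA.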
Reference: L. Buesing and W. Maass. Simplified rules and theoretical analysis
for information bottleneck optimization and PCA with spiking neurons. In
Advances in Neural Information Processing Systems 20 (Proc. of NIPS 2007).
MIT Press, 2008.