Distributed Bayesian computation and self-organized learning in sheets
of spiking neurons with local lateral inhibition
J. Bill, L. Buesing, S. Habenschuss, B. Nessler, W. Maass, and
R. Legenstein
Abstract:
During the last decade, Bayesian probability theory has emerged as a framework
in cognitive science and neuroscience for describing perception, reasoning,
and learning in mammals. However, our understanding of how probabilistic
computations could be organized in the brain, and how the observed
connectivity structure of cortical microcircuits supports these calculations,
is rudimentary at best. In this study, we investigate statistical inference
and self-organized learning in a spatially extended spiking network model
that accommodates both local competitive and large-scale associative aspects
of neural information processing under a unified Bayesian account.
Specifically, we show how the spiking dynamics of a recurrent network with
lateral excitation and local inhibition, in response to distributed spiking
input, can be understood as sampling from a variational posterior
distribution of a well-defined implicit probabilistic model. This
interpretation further permits a rigorous analytical treatment of
experience-dependent plasticity at the network level. Using machine learning
theory, we derive update rules for neuron and synapse parameters that
correspond to Hebbian synaptic and homeostatic intrinsic plasticity rules in a
neural implementation. In computer simulations, we demonstrate that the interplay of
these plasticity rules leads to the emergence of probabilistic local experts
that form distributed assemblies of similarly tuned cells communicating
through lateral excitatory connections. The resulting sparse, distributed
spike code of a well-adapted network carries compressed information about
salient input features, combined with prior experience about correlations among
them. Our theory predicts that the emergence of such efficient
representations benefits from network architectures in which the range of
local inhibition matches the spatial extent of pyramidal cells that share
common afferent input.
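
As an illustration of the kind of mechanism described above, the short Python
sketch below implements a single local winner-take-all module in which soft-max
competition stands in for local lateral inhibition, combined with a
Hebbian-style weight update and a homeostatic bias update. All concrete choices
in it (dimensions, input statistics, and the exact update forms) are assumptions
made for illustration only; they are not the update rules derived in the paper.

    # Minimal sketch: one local winner-take-all (WTA) circuit with soft-max
    # competition (a stand-in for local lateral inhibition), a Hebbian-style
    # weight update, and a homeostatic bias update. Illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 20, 5                              # afferent inputs, neurons in the circuit
    W = rng.normal(0.0, 0.1, size=(K, N))     # afferent synaptic weights
    b = np.zeros(K)                           # intrinsic excitabilities (biases)
    eta = 0.05                                # learning rate

    def wta_sample(x):
        """Draw one winning neuron; the soft-max mimics divisive lateral inhibition."""
        u = W @ x + b                         # membrane potentials
        p = np.exp(u - u.max())
        p /= p.sum()
        return rng.choice(K, p=p)

    for step in range(1000):
        x = (rng.random(N) < 0.3).astype(float)     # toy binary afferent spike pattern
        k = wta_sample(x)
        W[k] += eta * (x - W[k])                    # Hebbian-style update for the winner
        b += eta * ((np.arange(K) == k) - 1.0 / K)  # homeostasis toward uniform firing

In this toy setting, repeated winners move their afferent weights toward the
mean of the inputs they respond to, while the bias update keeps all neurons
participating, a crude caricature of the interplay between synaptic and
intrinsic plasticity summarized in the abstract.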
Reference: J. Bill, L. Buesing, S. Habenschuss, B. Nessler, W. Maass, and
R. Legenstein.
Distributed Bayesian computation and self-organized learning in sheets of
spiking neurons with local lateral inhibition.
PLOS ONE, 10(8):e0134356, 2015.