Hebbian learning of Bayes optimal decisions
B. Nessler, M. Pfeiffer, and W. Maass
When we perceive our environment, make a decision, or take an action, our brain
has to deal with multiple sources of uncertainty. The Bayesian framework of
statistical estimation provides computational methods for dealing optimally
with uncertainty. Bayesian inference however is algorithmically quite
complex, and learning of Bayesian inference involves the storage and updating
of probability tables or other data structures that are hard to implement in
neural networks. Hence it is unclear how our nervous system could acquire the
capability to approximate optimal Bayesian inference and action selection.
This article shows that the simplest and experimentally best supported type
of synaptic plasticity, Hebbian learning, can in principle achieve this. Even
inference in complex Bayesian networks can be approximated by Hebbian
learning in combination with population coding and lateral inhibition
("Winner-Take-All") in cortical microcircuits that produce a sparse
encoding of complex sensory stimuli. We also show that a corresponding
reward-modulated Hebbian plasticity rule provides a principled framework for
understanding how Bayesian inference could support fast reinforcement
learning in the brain. In particular, we show that recent experimental results
by Yang and Shadlen on reinforcement learning of probabilistic inference in
primates can be modeled in this way.
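The core mechanism described above, Hebbian plasticity under hard lateral inhibition, can be sketched in a few lines. The following is an illustrative toy example only, not the plasticity rule derived in the paper: the network sizes, learning rate `eta`, and the simple "move the winner's weights toward the input" update are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_units = 6, 3              # sizes chosen for illustration
eta = 0.1                             # learning rate (assumed value)
W = rng.random((n_units, n_inputs))   # feedforward weights in [0, 1)

def wta_hebbian_step(W, y):
    """One Hebbian update under winner-take-all lateral inhibition.

    y is a binary input vector (e.g. a population code of a stimulus).
    Lateral inhibition lets only the unit with the largest summed input
    fire; only that winner updates, moving its weights toward the input.
    """
    k = int(np.argmax(W @ y))         # winner-take-all competition
    W[k] += eta * (y - W[k])          # Hebbian: strengthen active inputs
    return k

# Present random binary patterns; each winner's weight vector drifts
# toward the patterns it wins, yielding a sparse encoding of the inputs.
patterns = (rng.random((20, n_inputs)) < 0.5).astype(float)
for y in patterns:
    wta_hebbian_step(W, y)
```

Because each update is a convex combination of the old weights and the binary input, the weights stay bounded in [0, 1]; different units come to specialize on different input patterns, which is the sparse-coding effect the abstract refers to.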
Reference: B. Nessler, M. Pfeiffer, and W. Maass.
Hebbian learning of Bayes optimal decisions.
In Proc. of NIPS 2008: Advances in Neural Information Processing
Systems 21, 2009.