D. Kappel, S. Habenschuss, R. Legenstein, and W. Maass
In this article, we reexamine the conceptual and mathematical framework for
understanding the organization of plasticity in spiking neural networks. We
propose that inherent stochasticity enables synaptic plasticity to carry out
probabilistic inference by sampling from a posterior distribution of synaptic
parameters. This view provides a viable alternative to existing models that
propose convergence of synaptic weights to maximum likelihood parameters. It
explains how priors on weight distributions and connection probabilities can
be merged optimally with learned experience. In simulations we show that our
model for synaptic plasticity allows spiking neural networks to compensate
continuously for unforeseen disturbances. Furthermore, it provides a normative
mathematical framework for understanding the permanent variability and
rewiring observed in brain networks.
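
As a rough illustration of the posterior-sampling view, and not the authors' actual plasticity rule, the sketch below samples a single synaptic parameter with unadjusted Langevin dynamics: the update combines the gradient of a log-prior, the gradient of a data log-likelihood, and injected noise, so the trajectory's stationary distribution is the posterior that merges prior and learned experience. The Gaussian prior, the surrogate data, and all numerical settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): one synaptic parameter theta with a
# Gaussian prior and Gaussian-likelihood "observations" standing in for experience.
mu_prior, sigma_prior = 0.0, 1.0            # prior over theta
sigma_lik = 0.2                              # observation noise
data = rng.normal(0.5, sigma_lik, size=50)   # surrogate learned experience

def grad_log_prior(theta):
    return -(theta - mu_prior) / sigma_prior**2

def grad_log_likelihood(theta):
    return np.sum(data - theta) / sigma_lik**2

# Unadjusted Langevin dynamics: drift along the posterior log-density gradient
# plus Gaussian noise; theta then wanders according to p(theta | data).
eta = 1e-4        # step size
theta = 0.0
samples = []
for _ in range(20000):
    drift = grad_log_prior(theta) + grad_log_likelihood(theta)
    theta += eta * drift + np.sqrt(2 * eta) * rng.normal()
    samples.append(theta)

print("posterior mean ~", np.mean(samples[5000:]))
print("posterior std  ~", np.std(samples[5000:]))
```

In this toy setting the noise term is what keeps the parameter sampling the full posterior rather than converging to a single maximum likelihood value, which is the contrast the abstract draws with existing models.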