A reward-modulated Hebbian learning rule can explain experimentally
observed network reorganization in a brain control task
R. Legenstein, S. M. Chase, A. B. Schwartz, and W. Maass
It has recently been shown in a brain-computer interface experiment that motor
cortical neurons change their tuning properties selectively to compensate for
errors induced by displaced decoding parameters. In particular, it was shown
that the three-dimensional tuning curves of neurons whose decoding parameters
were reassigned changed more than those of neurons whose decoding parameters
had not been reassigned. In this article, we propose a simple learning rule
that can reproduce this effect. Our learning rule uses Hebbian weight updates
driven by a global reward signal and neuronal noise. In contrast to most
previously proposed learning rules, this approach does not require extrinsic
information to separate noise from signal. The learning rule is able to
optimize the performance of a model system within biologically realistic
periods of time under high noise levels. Furthermore, when the model
parameters are matched to data recorded during the brain-computer interface
learning experiments described above, the model produces learning effects
strikingly similar to those found in the experiments.
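The rule described above can be illustrated with a minimal simulation. The sketch below is a simplified stand-in, not the paper's model: a single linear readout with additive output noise must match a fixed target mapping (a toy version of the decoding task), and weights are updated by the product of presynaptic activity, the deviation of the noisy output from its running average, and the deviation of the reward from its running average. Because the running averages can separate noise from signal only when inputs vary slowly relative to the averaging timescale, each input is held for several time steps, and updates are gated until the averages have settled on the current input (both of these are simplifying assumptions). All names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the decoding task: a linear readout whose
# output y should match a fixed target mapping w_target @ x.
n_in = 5
w_target = rng.normal(0.0, 1.0, n_in)  # "correct" weights (assumed)
w = rng.normal(0.0, 0.1, n_in)         # initial weights

eta = 0.01       # learning rate
noise_std = 0.5  # exploratory neuronal noise
alpha = 0.3      # smoothing factor of the running averages
y_bar = r_bar = 0.0
errors = []

for segment in range(1000):
    # Inputs vary slowly: each input is held for several time steps, so
    # the running averages can track the signal component of y and r.
    x = rng.normal(0.0, 1.0, n_in)
    for t in range(20):
        y = w @ x + rng.normal(0.0, noise_std)  # noisy postsynaptic output
        r = -(y - w_target @ x) ** 2            # global scalar reward
        if t >= 5:  # let the averages settle on the current input first
            # Reward-modulated Hebbian update: presynaptic activity times
            # the output deviation times the reward deviation. No extrinsic
            # knowledge of the injected noise is used.
            w += eta * (r - r_bar) * (y - y_bar) * x
        y_bar = (1 - alpha) * y_bar + alpha * y
        r_bar = (1 - alpha) * r_bar + alpha * r
    errors.append(abs(w @ x - w_target @ x))  # noise-free error per segment

early, late = np.mean(errors[:50]), np.mean(errors[-50:])
print(f"mean |error|: first 50 segments {early:.3f}, last 50 {late:.3f}")
```

After settling, the output deviation is dominated by the exploratory noise, so the update correlates that noise with reward fluctuations and on average ascends the reward, which is how a global scalar signal can steer individual synapses without any explicit error backpropagation.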
Reference: R. Legenstein, S. M. Chase, A. B. Schwartz, and W. Maass.
A reward-modulated Hebbian learning rule can explain experimentally observed
network reorganization in a brain control task.
The Journal of Neuroscience, 30(25):8400–8410, 2010.