Current State and Future Directions for Learning in Biological Recurrent
Neural Networks: A Perspective Piece
L. Y. Prince, R. H. Eyono, E. Boven, A. Ghosh, J. Pemberton, F. Scherr,
C. Clopath, R. P. Costa, W. Maass, B. A. Richards, C. Savin, and
K. A. Wilmes
Abstract:
This perspective piece came about through the Generative Adversarial
Collaboration (GAC) series of workshops organized by the Computational
Cognitive Neuroscience (CCN) conference in 2020. We brought together a number
of experts from the field of theoretical neuroscience to debate emerging
issues in our understanding of how learning is implemented in biological
recurrent neural networks. Here, we give a brief review of the common
assumptions about biological learning and the corresponding findings from
experimental neuroscience, and contrast them with the efficiency of
gradient-based learning in the recurrent neural networks commonly used in
artificial intelligence. We then outline the key issues discussed in the
workshop: synaptic plasticity, neural circuits, the theory-experiment divide,
and objective functions. Finally, we conclude with recommendations for both
theoretical and experimental neuroscientists on designing new studies that
could help bring clarity to these issues.
Reference: L. Y. Prince, R. H. Eyono, E. Boven, A. Ghosh, J. Pemberton,
F. Scherr, C. Clopath, R. P. Costa, W. Maass, B. A. Richards, C. Savin, and
K. A. Wilmes.
Current state and future directions for learning in biological recurrent
neural networks: A perspective piece.
Neurons, Behavior, Data analysis, and Theory, 1, 2022.