Local prediction-learning in high-dimensional spaces enables neural
networks to plan
C. Stoeckl, Y. Yang, and W. Maass
Abstract:
Planning and problem solving are cornerstones of higher brain function, but we
do not know how the brain achieves them. We show that learning a suitable
cognitive map of the problem space suffices. Furthermore, this learning can be reduced
to learning to predict the next observation through local synaptic
plasticity. Importantly, the resulting cognitive map encodes relations
between actions and observations, and its emergent high-dimensional geometry
provides a sense of direction for reaching distant goals. This
quasi-Euclidean sense of direction provides a simple heuristic for online
planning that works almost as well as the best offline planning algorithms
from AI. If the problem space is a physical space, this method automatically
extracts structural regularities from the sequence of observations that it
receives so that it can generalize to unseen parts. This speeds up learning
of navigation in 2D mazes and of locomotion with complex actuator systems,
such as legged bodies. The cognitive map learner that we propose does not
require a teacher, similar to self-attention networks (Transformers). But in
contrast to Transformers, it does not require backpropagation of errors or
very large datasets for learning. Hence it provides a blueprint for future
energy-efficient neuromorphic hardware that acquires advanced cognitive
capabilities through autonomous on-chip learning.
Reference: C. Stoeckl, Y. Yang, and W. Maass.
Local prediction-learning in high-dimensional spaces enables neural networks
to plan.
Nature Communications, 15, March 2024.