Perspectives of the High-Dimensional Dynamics of Neural Microcircuits
from the Point of View of Low-Dimensional Readouts
S. Haeusler, H. Markram, and W. Maass
Abstract:
We investigate generic models for cortical microcircuits, i.e., recurrent
circuits of integrate-and-fire neurons with dynamic synapses. These complex
dynamical systems subserve the remarkable information processing capabilities
of the cortex, but are at present still poorly understood. We analyze the
transient dynamics of models for neural microcircuits from the point of
view of one or two readout neurons that collapse the high-dimensional
transient dynamics of a neural circuit into a one- or two-dimensional output
stream. This stream may, for example, represent the information that is
projected from such a circuit to some particular other brain area or to actuators.
It is shown that simple local learning rules enable a readout neuron to
extract from the high-dimensional transient dynamics of a recurrent neural
circuit quite different low-dimensional projections, which may even contain
"virtual attractors" that are not apparent in the high-dimensional dynamics
of the circuit itself. Furthermore, it is demonstrated that the information
extraction capabilities of linear readout neurons are boosted by the
computational operations of a sufficiently large preceding neural
microcircuit. Hence a generic neural microcircuit may play a role in
information processing similar to that of a kernel for support vector machines
in machine learning. We demonstrate that the projection of time-varying inputs into a
large recurrent neural circuit enables a linear readout neuron to classify
the time-varying circuit inputs with the same power as complex nonlinear
classifiers, such as a pool of perceptrons trained by the p-delta rule or a
feedforward sigmoidal neural net trained by backprop, provided that the size
of the recurrent circuit is sufficiently large. At the same time, such readout
neurons can exploit the stability and speed of learning rules for linear
classifiers, thereby overcoming the problems caused by local minima in the
error function of nonlinear classifiers. In addition, it is demonstrated that
pairs of readout neurons can transform the complex trajectory of transient
states of a large neural circuit into a simple and clearly structured
two-dimensional trajectory. This two-dimensional projection of the
high-dimensional trajectory can even exhibit convergence to virtual
attractors that are not apparent in the high-dimensional trajectory. © 2003
Wiley Periodicals, Inc.
Key Words: cortical microcircuits; machine learning; p-delta rule
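As an illustration of this kernel analogy, here is a minimal sketch, not the
simulation reported in the paper: the integrate-and-fire microcircuit with
dynamic synapses is replaced by a surrogate random recurrent rate network, and
a plain linear readout is fitted by regularized least squares on the resulting
high-dimensional transient states to classify two classes of time-varying
inputs. All network sizes, input signals, and parameter values below are
illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Surrogate "microcircuit": a random recurrent rate network, standing in
    # (as an assumption) for the integrate-and-fire circuit with dynamic synapses.
    N, T = 200, 100                              # circuit size, time steps per input
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    W_in = rng.normal(0.0, 1.0, N)

    def final_state(u):
        # Run the surrogate circuit on a 1-D input signal u and return the
        # high-dimensional transient state reached at the last time step.
        x = np.zeros(N)
        for t in range(len(u)):
            x = np.tanh(W @ x + W_in * u[t])
        return x

    def make_input(label):
        # Two classes of time-varying inputs: noisy sine waves of different frequency.
        t = np.arange(T)
        freq = 0.05 if label == 0 else 0.1
        return np.sin(2 * np.pi * freq * t) + 0.2 * rng.normal(size=T)

    labels = rng.integers(0, 2, 300)
    X = np.array([final_state(make_input(lab)) for lab in labels])

    # Linear readout fitted by regularized least squares on the circuit states
    # (a convenient stand-in for the simple learning rules discussed in the paper).
    n_train = 200
    A, y = X[:n_train], 2.0 * labels[:n_train] - 1.0
    w = np.linalg.solve(A.T @ A + 1e-2 * np.eye(N), A.T @ y)

    pred = np.sign(X[n_train:] @ w)
    print("held-out accuracy of the linear readout:",
          np.mean(pred == 2.0 * labels[n_train:] - 1.0))

The point of the sketch is only the division of labor claimed in the abstract:
the readout itself stays linear and is fitted by a convex procedure without
local minima, while all nonlinear and temporal integration is left to the
preceding recurrent circuit.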
Reference: S. Haeusler, H. Markram, and W. Maass.
Perspectives of the high-dimensional dynamics of neural microcircuits from the
point of view of low-dimensional readouts.
Complexity (Special Issue on Complex Adaptive Systems), 8(4):39-50,
2003.
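The two-dimensional projections discussed in the abstract can be pictured with
a similar minimal sketch, again using a surrogate random recurrent network
rather than the circuit model of the paper: two linear readouts are fitted so
that the 2-D readout trajectory of each input class is pulled toward a
class-specific target point, which then behaves like a "virtual attractor" of
the low-dimensional projection even though the high-dimensional dynamics has
no such attractor. All names and parameters below are again illustrative
assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Same kind of surrogate recurrent circuit as in the previous sketch.
    N, T = 200, 150
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    W_in = rng.normal(0.0, 1.0, N)

    def run(u):
        # Return the whole sequence of high-dimensional states while processing u.
        x, xs = np.zeros(N), []
        for t in range(len(u)):
            x = np.tanh(W @ x + W_in * u[t])
            xs.append(x.copy())
        return np.array(xs)                      # shape (T, N)

    def make_input(label):
        t = np.arange(T)
        freq = 0.05 if label == 0 else 0.1
        return np.sin(2 * np.pi * freq * t) + 0.2 * rng.normal(size=T)

    # Each class is assigned its own target point in the plane; if the fit
    # succeeds, that point acts as a "virtual attractor" of the 2-D projection.
    targets = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

    # Collect (state, target) pairs from many runs and fit the two readouts jointly.
    states, goals = [], []
    for _ in range(100):
        label = int(rng.integers(0, 2))
        xs = run(make_input(label))
        states.append(xs[T // 2:])               # later, input-specific part of the run
        goals.append(np.tile(targets[label], (T - T // 2, 1)))
    S, G = np.vstack(states), np.vstack(goals)
    W_out = np.linalg.solve(S.T @ S + 1e-2 * np.eye(N), S.T @ G)   # shape (N, 2)

    # The 2-D readout trajectory for a fresh class-1 input should drift toward (0, 1).
    traj = run(make_input(1)) @ W_out
    print("start of 2-D trajectory:", traj[0])
    print("end of 2-D trajectory:  ", traj[-1])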