
What is Different in Neural Computation in Comparison with Computation in a Digital Computer?

The short answer to this question is: basically everything. More specifically, one can point to three basic aspects in which the structure of neural computation differs from the structure of computation in present-day computers:

  * the different role of time,
  * the different role of space,
  * the different nature of the ``program'' of a computation.

One may argue that these three are among the most important aspects of a computation. Hence a computer scientist working on neural computation should be ready to revise fundamental methods and assumptions of his discipline. Various approaches got stuck because they were too timid in this regard. For example, threshold circuits (and, more generally, most common artificial neural network models) employ computational units that have some aspects in common with biological neurons, but they handle time in the traditional way of computer science (assuming a global clock). Therefore neuroscientists have a hard time relating these models to biological neural systems. There are a few basic facts to which one can point in order to highlight the different role that time plays in biological neural systems. Common biological neural systems have no central clock that marks those points in time when computational units should change their output, and when signals are to be read. Rather, the temporal dynamics of neural systems is input-driven, and salient aspects of the input and of internal state representations are encoded in the temporal relationships between different computational events. Furthermore, inputs and outputs of neural systems frequently consist of time series of analog values, rather than of a single batch of numbers or bits as in the common computational models of theoretical computer science.

A closer look at the computational units of neural systems immediately reveals their particular capability for carrying out computations on time series. The output of a biological neuron consists of ``action potentials'' or ``spikes'' (see Fig. [1]). More precisely: a biological neuron does not output any number or bit; instead, it marks points in time.

  
Figure 1: a) Typical action potential (spike). b) A typical spike train produced by a neuron (each firing time marked by a bar).

The input to a biological neuron v consists of trains of pulses, so-called excitatory postsynaptic potentials (EPSP's) and inhibitory postsynaptic potentials (IPSP's), with shapes as indicated in Fig. [2].
 
Figure 2: a) Typical time course of an excitatory postsynaptic potential (EPSP). b) Typical time course of an inhibitory postsynaptic potential (IPSP). The vertical axis denotes the membrane voltage of the neuron v.

More precisely: about 1000 to 10000 other neurons u are each connected to v by a synapse. The synapse from neuron u to neuron v transforms the output spike train of neuron u (which is of the type illustrated in Fig. [1] b)) into a train of EPSP's or IPSP's in neuron v. The neuron v ``fires'' and emits a spike whenever the current value of the sum of EPSP's and IPSP's from all presynaptic neurons u exceeds a certain threshold. A simulator of a small circuit of spiking neurons (which requires no background knowledge) is available from the homepage of our institute [8]. This simulator already gives an idea of the complicated changes in the dynamics of a neural circuit that result from changes in the synaptic ``weights''. A glimpse at the temporal dynamics of a real neural system is provided in Fig. [3].


  
Figure 3: Simultaneous recordings (over 4 seconds) of the firing times of 30 neurons from monkey striate cortex by Krüger and Aiple [Krüger and Aiple, 1988]. Each firing is denoted by a vertical bar, with a separate row for each neuron. For comparison we have shaded an interval of 150 msec. This time span is known to suffice for the completion of some very complex computations, such as the recognition of a face, that involve several cortical areas.
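
The firing mechanism described above (summing EPSP's and IPSP's and emitting a spike when the sum crosses a threshold) can be made concrete with a minimal Python sketch. The PSP kernel shape (a difference of exponentials), the threshold value, and the simple refractory handling are illustrative assumptions and are not meant to reproduce the simulator mentioned above or any particular biophysical model.

```python
import numpy as np

def psp_kernel(t, tau_rise=1.0, tau_decay=5.0):
    """Stereotyped postsynaptic potential shape (difference of exponentials, times in ms).
    Illustrative choice; real EPSP/IPSP shapes vary with synapse type and location."""
    t = np.maximum(t, 0.0)
    return np.exp(-t / tau_decay) - np.exp(-t / tau_rise)

def simulate_neuron(input_spikes, weights, threshold=1.0, dt=0.1, t_max=100.0,
                    refractory=2.0):
    """input_spikes: one array of presynaptic firing times (ms) per input neuron u.
    weights: postsynaptic amplitude per synapse (>0 yields EPSP's, <0 yields IPSP's).
    Returns the firing times of the postsynaptic neuron v."""
    times = np.arange(0.0, t_max, dt)
    # Membrane potential = weighted sum of PSP kernels triggered by all input spikes.
    potential = np.zeros_like(times)
    for spikes_u, w in zip(input_spikes, weights):
        for s in spikes_u:
            potential += w * psp_kernel(times - s)
    # v fires whenever the summed potential exceeds the threshold (with a refractory period).
    output_spikes, last_spike = [], -np.inf
    for t, u_t in zip(times, potential):
        if u_t >= threshold and t - last_spike >= refractory:
            output_spikes.append(t)
            last_spike = t
    return np.array(output_spikes)

# Example: two excitatory and one inhibitory input spike train.
rng = np.random.default_rng(0)
inputs = [np.sort(rng.uniform(0, 100, 20)) for _ in range(3)]
print(simulate_neuron(inputs, weights=[1.0, 1.0, -0.8]))
```

Note that the output of this model depends only on the timing of the input spikes, in line with the observation that a neuron neither receives nor emits numbers or bits, but marked points in time.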

The understanding of the temporal dynamics of neural computation is further complicated by the fact that the ``weight'' of a biological synapse (i.e. the amplitude of the resulting EPSP or IPSP) changes from spike to spike, even if there is currently no ``learning'' involved (see [Maass and Zador, 1999] for a survey and [Maass and Sontag, 1999] for mathematical models). Hence biological synapses resemble finite automata more than mere multiplicative factors (the synaptic ``weights''). In other words: besides neurons, synapses constitute a second type of computational unit in neural systems, one that is specialized for processing time series. This biological insight has drastic consequences for understanding learning in neural systems, since it has become quite likely that learning should be viewed on the synaptic level as changes in the transition functions of the finite automata that encode the dynamic behavior of the synapses involved (see [Natschläger et al., 2000] for computer simulations). We refer to [Maass and Bishop, 1999] for further information on the role of temporal dynamics in neural computation and first attempts to employ similar mechanisms in artificial computing machinery. There is substantial room for new ideas from computer science for exploiting the timing of computation steps as a resource for coding, communication and computation.
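
The following Python sketch illustrates what it means for a synapse to act like a small automaton rather than a fixed weight. It implements one common phenomenological model of short-term synaptic dynamics, with a facilitation variable and a depression variable that are updated at each presynaptic spike. The specific update equations and parameter values are illustrative assumptions in the spirit of the models surveyed in [Maass and Zador, 1999], not a definitive implementation.

```python
import math

class DynamicSynapse:
    """A synapse whose effective ``weight'' depends on the history of presynaptic spikes.
    State: u (facilitation) and R (fraction of available resources, i.e. depression).
    The parameters U, F, D play the role of the transition function of a small automaton."""

    def __init__(self, w=1.0, U=0.4, F=50.0, D=400.0):
        self.w, self.U, self.F, self.D = w, U, F, D   # time constants F, D in ms
        self.u, self.R = U, 1.0
        self.last_spike = None

    def amplitude(self, t):
        """Return the PSP amplitude caused by a presynaptic spike arriving at time t (ms)."""
        if self.last_spike is not None:
            dt = t - self.last_spike
            # Between spikes, facilitation decays back to U and resources recover towards 1.
            self.u = self.U + (self.u - self.U) * math.exp(-dt / self.F)
            self.R = 1.0 + (self.R - 1.0) * math.exp(-dt / self.D)
        a = self.w * self.u * self.R          # effective ``weight'' for this spike
        # The spike consumes resources and facilitates release for the next spike.
        self.R -= self.u * self.R
        self.u += self.U * (1.0 - self.u)
        self.last_spike = t
        return a

# The same synapse produces a different amplitude for each spike of a regular 20 Hz train:
syn = DynamicSynapse()
print([round(syn.amplitude(t), 3) for t in range(0, 300, 50)])
```

With these (assumed) parameters the amplitudes decrease from spike to spike; other parameter settings yield facilitating behavior, which is exactly the history dependence that a single multiplicative weight cannot express.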

The special role of space in neural computation becomes clear if one takes into account that the total length of wires is one of the most severe bottlenecks for neural circuits (see [Braitenberg and Schuez, 1998; Chklovskii and Stevens, 1998]). From the point of view of theoretical computer science this is quite fortunate, since the known upper bounds for the total length of wires in neural circuits severely restrict the types of circuit architectures that are biologically realistic. For example, most threshold circuits and artificial neural networks require full connectivity between two layers of (at least) linear size. This connectivity structure would require too much total wire length, a fact that alone already makes them unrealistic as models for commonly studied neural systems such as primary visual cortex. Thus we arrive at the challenging problem of coming up with alternative circuit architectures that make more efficient use of wire length by employing more sophisticated spatial arrangements of computational units and data. This topic was already investigated in theoretical computer science quite a while ago, in the context of VLSI design. The precise complexity constraints on wire length are somewhat different for neural circuits, since here the third dimension can be used more extensively for the routing of wires. We refer to [Legenstein and Maass, 2000] for a precise formulation of total wire length as a complexity measure for neural circuits, and first results on circuit architectures that appear to be realistic from this point of view. Such a top-down approach towards finding realistic design strategies for neural circuits is a rich source of research problems where methods from theoretical computer science can be applied. Another attractive goal is to understand the computational role of recurrent connections in neural circuits.
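
A rough calculation illustrates why full connectivity between layers is so costly in terms of wire. The Python sketch below compares the total wire length of full connectivity between two layers laid out on a planar grid with that of a sparse, locally connected architecture. The grid layout, the Euclidean distance measure, and the connectivity radius are illustrative assumptions and are not meant to reproduce the complexity measure of [Legenstein and Maass, 2000].

```python
import itertools, math

def grid(n, offset=(0.0, 0.0)):
    """Place n units on a roughly square planar grid with unit spacing."""
    side = math.ceil(math.sqrt(n))
    pts = [(x + offset[0], y + offset[1])
           for x, y in itertools.product(range(side), repeat=2)]
    return pts[:n]

def total_wire_length(layer_a, layer_b, radius=None):
    """Sum of Euclidean wire lengths; if radius is given, connect only nearby pairs."""
    total = 0.0
    for (xa, ya) in layer_a:
        for (xb, yb) in layer_b:
            d = math.hypot(xa - xb, ya - yb)
            if radius is None or d <= radius:
                total += d
    return total

n = 400
a, b = grid(n), grid(n, offset=(0.5, 0.5))   # two interleaved layers of n units each
print("full connectivity :", round(total_wire_length(a, b)))
print("local connectivity:", round(total_wire_length(a, b, radius=2.0)))
```

Full connectivity requires wire length that grows roughly like n^2 times the average distance between units, whereas local connectivity grows only linearly in n; this gap is what rules out fully connected layers as models for large neural systems.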

The ``program'' of a neural circuit consists of those aspects of its architecture that are fixed in the genetic code, together with the results of learning. A straightforward calculation shows that only a relatively small portion of the parameters that are needed to describe a neural circuit can possibly be contained in the genetic code. Consequently one can understand neural circuits only if one also manages to understand the learning algorithms that shape and maintain them. Unfortunately, the empirical data on the learning mechanisms employed by neural systems are quite complex. For example, a few years ago it was discovered that the relative timing of the firing of the pre- and postsynaptic neuron is essential for the ``learning'' of a synapse [Markram et al., 1997], thereby placing many earlier learning theories based on ``static'' learning rules in jeopardy. We can expect to see during the next few years more empirical data regarding the way in which the different parameters that control the inherent dynamics of a synapse (or, in the language of computer science: the transition function of the finite automaton implemented by the synapse) change during learning. It is widely believed by neuroscientists that learning is implemented simultaneously by a variety of different mechanisms that act on different time scales. A challenging goal for a theoretical computer scientist is to provide plausible models for the global organization of these learning mechanisms. Such models are urgently needed, but they can hardly be inferred directly from empirical data. Here it will be necessary to ``guess'' reasonable models, and to look for predictions of such models that can be tested empirically.
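
To give a concrete flavor of timing-dependent synaptic ``learning'', the following Python sketch implements a simple pair-based spike-timing-dependent plasticity rule: the weight is increased when the presynaptic spike shortly precedes the postsynaptic one, and decreased in the opposite case. The exponential learning window, the amplitudes, and the time constants are common modelling assumptions and are not taken from [Markram et al., 1997].

```python
import math

def stdp_weight_change(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based spike-timing-dependent plasticity rule (times in ms).
    Pre before post (causal order)  -> potentiation (weight increase).
    Post before pre (acausal order) -> depression (weight decrease).
    Amplitudes and time constants are illustrative assumptions."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# The weight is nudged up or down depending only on the relative timing of spike pairs:
w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0), (70.0, 72.0)]:
    w += stdp_weight_change(t_pre, t_post)
    print(round(w, 4))
```

In such a rule the ``static'' weight update of classical learning theories is replaced by a function of spike timing; more refined models would additionally change the parameters that govern the inherent dynamics of the synapse, i.e. its transition function.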


FOOTNOTES:
[8] see DEMOS on http://www.igi.tu-graz.ac.at/
