The short answer to this question is: basically everything. More specifically, one can point to three basic aspects in which the structure of neural computation differs from the structure of computation in present-day computers: the role of time, the role of space, and the nature of the ``program''.
One may argue that these three are among the most important aspects of a computation. Hence a computer scientist working on neural computation should be ready to revise fundamental methods and assumptions of the discipline. Various approaches have gotten stuck because they were too timid in this regard. For example, threshold circuits (and more generally: most common artificial neural network models) employ computational units that have some aspects in common with biological neurons, but they handle time in the traditional way of computer science, assuming a global clock. Therefore neuroscientists have a hard time relating these models to biological neural systems.

There are a few basic facts to which one can point in order to highlight the different role that time plays in biological neural systems. Common biological neural systems have no central clock that marks those points in time when computational units should change their output and when signals are to be read. Rather, the temporal dynamics of neural systems is input-driven, and salient aspects of the input and of internal state representations are encoded in the temporal relationships between different computational events. Furthermore, the inputs and outputs of neural systems frequently consist of time series of analog values, rather than of a single batch of numbers or bits as in the common computational models of theoretical computer science.
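To make this contrast concrete, the following minimal Python sketch (purely illustrative; the function names, the decay constant, and the unit increment per event are hypothetical choices, not taken from the text) compares a clocked update loop, in which every unit changes state at every tick of a global clock, with an input-driven update, in which the state changes only at the points in time marked by incoming events and the length of the gap between events itself carries information.

\begin{verbatim}
import math

def clocked_update(state, inputs):
    """Conventional model: a global clock advances the unit in lockstep,
    one input value per tick."""
    for x in inputs:
        state = 0.9 * state + x        # the unit updates at every tick
    return state

def event_driven_update(state, event_times, tau=20.0):
    """Input-driven model: the state changes only at the points in time
    marked by incoming events; the silent gaps between events matter."""
    last_t = event_times[0] if event_times else 0.0
    for t in event_times:
        state *= math.exp(-(t - last_t) / tau)   # decay over the silent gap
        state += 1.0                             # effect of the event itself
        last_t = t
    return state

print(clocked_update(0.0, [1, 0, 0, 1]))
print(event_driven_update(0.0, [5.0, 7.0, 30.0]))
\end{verbatim}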
A closer look at the computational units of neural systems immediately reveals their particular capability for carrying out computations on time series. The output of a biological neuron consists of ``action potentials'' or ``spikes'' (see Fig. [1]). More precisely, a biological neuron does not output a number or a bit; instead, it marks points in time.
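As a concrete illustration of a computational unit whose output consists of points in time rather than numbers, the following minimal sketch simulates a standard leaky integrate-and-fire neuron, a common textbook abstraction rather than a model prescribed by the text; all parameter values are arbitrary choices for illustration.

\begin{verbatim}
def lif_spike_times(input_current, dt=1.0, tau=20.0,
                    v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrates an analog input time
    series and returns the points in time at which it emits a spike."""
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)    # leaky integration of the input
        if v >= v_thresh:              # threshold crossing -> spike
            spikes.append(step * dt)   # the output is a point in time
            v = v_reset                # reset after the spike
    return spikes

# A constant input yields a regular spike train; what the neuron passes on
# downstream is the timing of these events, not an analog amplitude.
print(lif_spike_times([0.08] * 200))
\end{verbatim}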
The special role of space in neural computation becomes clear if one takes into account that the total length of wires is one of the most severe bottlenecks for neural circuits (see [Braitenberg and Schuez, 1998; Chklovskii and Stevens, 1998]). From the point of view of theoretical computer science this is quite fortunate, since the known upper bounds on the total length of wires in neural circuits severely restrict the types of circuit architectures that are biologically realistic. For example, most threshold circuits and artificial neural networks require full connectivity between two layers of (at least) linear size. This connectivity structure would require too much total wire length, and this fact alone already makes them unrealistic as models for commonly studied neural systems such as primary visual cortex.

Thus we arrive at the challenging problem of coming up with alternative circuit architectures that make more efficient use of wire length by employing more sophisticated spatial arrangements of computational units and data. This topic was investigated in theoretical computer science quite a while ago, in the context of VLSI design. The precise complexity constraints on wire length are somewhat different for neural circuits, since here the third dimension can be used more extensively for the routing of wires. We refer to [Legenstein and Maass, 2000] for a precise formulation of total wire length as a complexity measure for neural circuits, and for first results on circuit architectures that appear to be realistic from this point of view. Such a top-down approach towards finding realistic design strategies for neural circuits is a rich source of research problems where methods from theoretical computer science can be applied. Another attractive goal is understanding the computational role of recurrent connections in neural circuits.
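As a rough quantitative illustration of the wire-length constraint discussed above, the following sketch compares the total Euclidean wire length of all-to-all versus local connectivity between two layers of units laid out on a 2D sheet. This is a hypothetical toy calculation, not the complexity measure of [Legenstein and Maass, 2000]; the grid layout, the neighborhood radius, and the function name are assumptions made for illustration.

\begin{verbatim}
import math

def total_wire_length(n, radius=None):
    """Total Euclidean wire length between two layers of n*n units that are
    laid out on the same 2D grid (a crude stand-in for a cortical sheet).
    With radius=None every unit of one layer is wired to every unit of the
    other layer; otherwise only to units within the given grid radius."""
    total = 0.0
    for x1 in range(n):
        for y1 in range(n):
            for x2 in range(n):
                for y2 in range(n):
                    d = math.hypot(x1 - x2, y1 - y2)
                    if radius is None or d <= radius:
                        total += d
    return total

# Full connectivity grows roughly like N^2 * sqrt(N) in the number of units
# N = n*n per layer, whereas a fixed local neighborhood grows only like N.
for n in (8, 16):
    print(n * n,
          round(total_wire_length(n)),
          round(total_wire_length(n, radius=2)))
\end{verbatim}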
The program of a neural circuit consists of those aspects of its architecture that are fixed in the genetic code, together with the results of learning. A straightforward calculation shows that only a relatively small portion of the parameters needed to describe a neural circuit can possibly be contained in the genetic code. Consequently, one can understand neural circuits only if one also manages to understand the learning algorithms that shape and maintain them. Unfortunately, the empirical data on the learning mechanisms employed by neural systems are quite complex. For example, a few years ago it was discovered that the relative timing of the firing of the pre- and postsynaptic neuron is essential for the ``learning'' of a synapse [Markram et al., 1997], thereby placing many earlier learning theories based on ``static'' learning rules in jeopardy. During the next few years we can expect to see more empirical data regarding the way in which the different parameters that control the inherent dynamics of a synapse (or, in the language of computer science: the transition function of the finite automaton implemented by the synapse) change during learning.

It is widely believed by neuroscientists that learning is implemented simultaneously by a variety of different mechanisms that act on different time scales. A challenging goal for a theoretical computer scientist is to provide plausible models for the global organization of these learning mechanisms. Such models are urgently needed, but they can hardly be inferred directly from empirical data. Here it will be necessary to ``guess'' reasonable models and to look for predictions of such models that can be tested empirically.
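The dependence of synaptic change on the relative timing of pre- and postsynaptic firing is commonly formalized as spike-timing-dependent plasticity. The sketch below implements a standard pairwise exponential STDP window as one simple instance of such a timing-based rule; the parameter values and function name are arbitrary illustrative choices, and this is not the specific rule reported in [Markram et al., 1997].

\begin{verbatim}
import math

def stdp_weight_change(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise spike-timing-dependent plasticity: the sign and size of the
    synaptic weight change depend on the time difference between the
    postsynaptic and the presynaptic spike."""
    dt = t_post - t_pre
    if dt >= 0:    # pre fires before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:          # post fires before pre: depression
        return -a_minus * math.exp(dt / tau)

# Pre 5 ms before post -> strengthening; pre 5 ms after post -> weakening.
print(stdp_weight_change(t_pre=100.0, t_post=105.0))
print(stdp_weight_change(t_pre=105.0, t_post=100.0))
\end{verbatim}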