Concurrently with the investigation of neural computation in living organisms, researchers have started to design electronic hardware that captures particular aspects of the special role that time and space play in biological neural computation (see [Mead, 1989] and [Murray, 1999]). Examples are artificial retinas [Mead, 1989], schemes for low-power analog communication between chips via pulses (address-event representation, see [Douglas et al., 1999,Mortara and Venier, 1999]), and cellular neural networks [Roska, 1997]. This approach is sometimes referred to as neuromorphic engineering. Obviously this area is still at a very early stage, and one might hope that theoretical computer science will play a role in its future development.
Within the concert of different scientific disciplines that investigate neural computation, theoretical computer science has the chance to provide abstract models that capture essential aspects of biological neural systems in a simplified mathematical framework, thereby providing a platform for extracting ``portable'' computational mechanisms and principles that can potentially be transported to novel artificial computing machinery. In spite of the important contributions made by theoretical physicists, cognitive scientists, and experts in information theory, one can easily detect the specific shortcomings of these approaches in the current state of theoretical knowledge about neural computation: Statistical physics provides wonderful tools for modeling large homogeneous systems, but it is less suitable for analyzing computations in specific circuits made up of diverse units. Approaches from cognitive science often neglect to ask how the complexity of a proposed circuit scales with the input size. Approaches from information theory are good at analyzing where and how information about an input is represented in neural circuits, but it is not clear whether they can also be used for analyzing efficient computations (whose goal is to produce an output, rather than to preserve the input).
These observations show that there is still a need for contributions to neural computation that make use of specific strengths of theoretical computer science, such as expertise in the design and comparison of computational models, the investigation of the computational power of specific computational models, and the analysis of the computational complexity of specific computational tasks. On the other hand, such contributions to neural computation require a strong effort towards interdisciplinary collaboration. There are very few problems arising from neural computation on which a theoretical computer scientist can start to work without further interaction. In most cases an ongoing interchange with experimental neuroscientists (and/or experts in neuromorphic engineering) is necessary in order to avoid focusing on questions or parameter ranges that are less relevant.
If the theoretical computer science community decides that it wants to include neural computation among its research topics, some organized efforts appear to be necessary. Beneficial short-term measures would be the inclusion of tutorials on neural computation in conferences for theoretical computer science, as well as the organization of informal workshops jointly with neuroscientists. One of the primary goals of such workshops should be the identification of specific topics in neural computation and learning where contributions from theoretical computer science can be expected to be of value. The next step would be the formation of research networks where research on these specific topics can be carried out by theoretical computer scientists in collaboration with neuroscientists and experts in neuromorphic engineering.