T. Natschlaeger, W. Maass, E. D. Sontag, and A. Zador
Experimental data show that biological synapses behave quite differently from
the symbolic synapses in common artificial neural network models. Biological
synapses are dynamic, i.e., their weight changes on a short time scale by
several hundred percent, depending on the past input to the synapse.
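As a concrete illustration of such weight dynamics, the short sketch below
implements the standard facilitation/depression recursion of Markram, Wang,
and Tsodyks (1998) for the efficacies of successive spikes. This particular
model and its parameter values (U, D, F) are illustrative assumptions, not
necessarily the exact synapse model analyzed in the article.

import numpy as np

def synaptic_efficacies(spike_times, U=0.4, D=0.5, F=0.2, A=1.0):
    # Efficacy of spike n is A * u_n * R_n, with facilitation u_n and
    # depression R_n evolving between spikes separated by dt:
    #   R_{n+1} = R_n * (1 - u_n) * exp(-dt/D) + 1 - exp(-dt/D)
    #   u_{n+1} = u' + U * (1 - u'),  where u' = u_n * exp(-dt/F)
    u, R = U, 1.0
    amps = [A * u * R]
    for dt in np.diff(spike_times):
        R = R * (1.0 - u) * np.exp(-dt / D) + 1.0 - np.exp(-dt / D)
        u_decayed = u * np.exp(-dt / F)
        u = u_decayed + U * (1.0 - u_decayed)
        amps.append(A * u * R)
    return np.array(amps)

# A regular 20 Hz spike train: the effective "weight" of successive
# spikes changes by large factors, depending on the spike history.
print(synaptic_efficacies(np.arange(0.0, 0.5, 0.05)))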
In this article we explore the consequences of this synaptic dynamics for the
computational power of feedforward neural networks. It turns out that
even with just a single hidden layer such networks can approximate a
surprisingly large class of nonlinear filters: all filters that can be
characterized by Volterra series.
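For reference (with notation assumed here, not quoted from the article), a
filter $F$ is characterized by a Volterra series if its output admits a
representation of the form

\[
  (Fx)(t) \;=\; h_0 \;+\; \sum_{n=1}^{\infty}
  \int_0^{\infty}\!\cdots\int_0^{\infty}
  h_n(\tau_1,\ldots,\tau_n)\, x(t-\tau_1)\cdots x(t-\tau_n)\,
  d\tau_1\cdots d\tau_n,
\]

where the kernel $h_n$ captures the order-$n$ interactions of past inputs.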
This result is robust with respect to various changes in the model of
synaptic dynamics. Furthermore, we show that
simple gradient descent suffices to approximate a given quadratic filter by a
rather small neural system with dynamic synapses.
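To make this training setup concrete, here is a hypothetical miniature
version of such an experiment. Since reproducing the article's
dynamic-synapse architecture would require the full model, the sketch
substitutes a plain one-hidden-layer network reading a short input window and
fits a randomly chosen quadratic (second-order Volterra) filter by batch
gradient descent on the mean squared error; all sizes and the learning rate
are assumptions for illustration only.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

M, K = 5, 8                              # filter memory (taps), hidden units
H = rng.normal(size=(M, M)) / M          # target quadratic kernel ...
H = 0.5 * (H + H.T)                      # ... made symmetric

x = rng.normal(size=2000)                # input time series
X = sliding_window_view(x, M)            # row t holds x[t], ..., x[t+M-1]
y = np.einsum('ti,ij,tj->t', X, H, X)    # target: quadratic filter output

W = 0.1 * rng.normal(size=(K, M))        # hidden-layer weights
a = 0.1 * rng.normal(size=K)             # output weights

lr = 0.1
for step in range(2001):
    h = np.tanh(X @ W.T)                 # hidden activations, shape (T, K)
    err = h @ a - y                      # prediction error per time step
    if step % 500 == 0:
        print(step, 0.5 * np.mean(err ** 2))
    a -= lr * (h.T @ err) / len(err)     # gradient of the mean squared error
    W -= lr * ((err[:, None] * a * (1 - h ** 2)).T @ X) / len(err)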
We demonstrate that with this approach the nonlinear filter considered by
Back and Tsoi (1993) can be approximated even better than by their model.