Learning of depth two neural nets with constant fan-in at the hidden nodes
P. Auer, S. Kwek, W. Maass, and M. K. Warmuth
We present algorithms for learning depth two neural networks where the hidden
nodes are threshold gates with constant fan-in. The transfer function of the
output node may be more general: we give results for the cases in which the
threshold function, the logistic function, or the identity function is used as
the transfer function at the output node. We give batch and on-line learning
algorithms for these classes of neural networks and prove bounds on the
performance of our algorithms. The batch algorithms work for real-valued
inputs, whereas the on-line algorithms assume that the inputs are discretized.
The hypotheses of our algorithms are essentially also neural networks of
depth two. However, their number of hidden nodes may be much larger than
that of the neural network being learned. Our algorithms can handle such a
large number of hidden nodes because they rely on multiplicative weight
updates at the output node, and the performance of these algorithms scales
only logarithmically with the number of hidden nodes.
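To illustrate why multiplicative updates scale logarithmically with the number of features, here is a minimal Winnow-style sketch on a toy monotone disjunction. This is a hypothetical illustration only: the abstract does not specify the paper's actual update rule or how the hidden-node features are expanded, so the functions, parameters, and target concept below are assumptions.

```python
# Winnow-style multiplicative weight update (illustrative sketch only;
# not the algorithm from the paper, whose details the abstract omits).
import itertools

def predict(weights, x, threshold):
    """Threshold-gate prediction on binary features."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0

def update(weights, x, y_true, y_pred, alpha=2.0):
    """On a mistake, multiplicatively promote (false negative) or
    demote (false positive) the weights of the active features."""
    if y_pred == y_true:
        return weights
    factor = alpha if y_true == 1 else 1.0 / alpha
    return [w * factor if xi else w for w, xi in zip(weights, x)]

# Toy target concept: the monotone disjunction x1 OR x2 over 4 inputs.
n = 4
examples = [(list(x), 1 if (x[0] or x[1]) else 0)
            for x in itertools.product([0, 1], repeat=n)]

weights = [1.0] * n
for _ in range(50):  # Winnow's mistake bound is O(k log n), so 50 epochs is ample
    mistakes = 0
    for x, y in examples:
        p = predict(weights, x, float(n))
        if p != y:
            mistakes += 1
            weights = update(weights, x, y, p)
    if mistakes == 0:
        break

accuracy = sum(predict(weights, x, float(n)) == y
               for x, y in examples) / len(examples)
```

The key property is that Winnow's mistake bound grows only logarithmically in the number of features, which is why a hypothesis with many hidden nodes remains learnable.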
Reference: P. Auer, S. Kwek, W. Maass, and M. K. Warmuth.
Learning of depth two neural nets with constant fan-in at the hidden nodes.
In Proceedings of the 9th Conference on Computational Learning Theory (COLT 1996),
pages 333-343. ACM Press, New York, 1996.