Wolfgang Maass: Publications
Link to Google Scholar
Link to PubMed
Talk at the 5th International Convention on the Mathematics of Neuroscience and AI, Rome 2024
Talk at the Allen Institute, Seattle, October 2023
Talk at the conference From Neuroscience to Artificially Intelligent Systems (NAISys), April 2022
Talk at the Nature conference "At the Interface of Brain and Machine", April 2021
Keynote at NICE 2021: Biological inspiration for improving computing and learning in spiking neural networks
Berkeley Lecture 2018: Networks of Spiking Neurons Learn to Learn and Remember
Berkeley Lectures 2018 on Computations in Networks of Neurons in the Brain: Part I - Part II - Slides
Berkeley Lectures 2015 on Searching for Principles of Brain Computation - Slides
Waterloo Brain Day Lectures 2013 - video lecture
This list is also available as a BibTeX file.
- [269]
- Yujie Wu and Wolfgang Maass.
A simple model for behavioral time scale
synaptic plasticity provides content addressable memory with binary synapses
and one-shot learning.
Nature Communications, 2025.
(Link to bioRxiv PDF)
- [268]
- Guoqi Li, Lei Deng, Huajin Tang, Gang
Pan, Yonghong Tian, Kaushik Roy, and Wolfgang Maass.
Brain-inspired computing: A systematic survey
and future trends.
Proceedings of the IEEE, 112(6):544-584, 2024.
(Link to IEEE Xplore
version)
- [267]
- Wolfgang Maass.
How can neuromorphic hardware attain brain-like
functional capabilities?
National Science Review, 11(5):nwad301, December 2023.
(Link to PDF,
Supplementary material PDF)
- [266]
- Anand Subramoney, Guillaume
Bellec, Franz Scherr, Robert Legenstein, and Wolfgang Maass.
Fast learning without synaptic
plasticity in spiking neural networks.
Scientific Reports, 14(1):8557, 2024.
Published on April 12, 2024.
(Link to Scientific
Reports version)
- [265]
- Christoph Stoeckl, Yukun Yang, and
Wolfgang Maass.
Local prediction-learning in
high-dimensional spaces enables neural networks to plan.
Nature Communications, 15, March 2024.
(Link to Nature
Communications version)
- [264]
- J. Galván Fraile, Franz Scherr,
José J. Ramasco, Anton Arkhipov, Wolfgang Maass, and Claudio R. Mirasso.
Modeling circuit mechanisms of opposing
cortical responses to visual flow perturbations.
PLOS Computational Biology, 20(3):1-27, March 2024.
(Link to PLOS
version)
- [263]
- Guozhang Chen, Franz Scherr, and
Wolfgang Maass.
Data-based large-scale models provide a
window into the organization of cortical computations.
bioRxiv, 2023.
(Link to bioRxiv PDF)
- [262]
- Guozhang Chen, Franz Scherr, and Wolfgang
Maass.
A data-based large-scale model for primary
visual cortex enables brain-like robust and versatile visual processing.
Science Advances, 8(44):eabq7592, 2022.
(Link to Science Advances
version)
- [261]
- Luke Y. Prince, Roy Henha Eyono, Ellen
Boven, Arna Ghosh, Joe Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte
Costa, Wolfgang Maass, Blake A. Richards, Cristina Savin, and Katharina Anna
Wilmes.
Current state and future directions for
learning in biological recurrent neural networks: A perspective piece.
Neurons, Behavior, Data analysis, and Theory, 1, 2022.
(Link to NBDT version, Link
to arXiv version PDF)
- [260]
- C. Kraisnikovic, W. Maass, and
R. Legenstein.
Spike-based symbolic computations
on bit strings and numbers.
Neuro-Symbolic Artificial Intelligence: The State of the Art,
342:214, 2022.
P. Hitzler and M. K. Sarker, editors.
(PDF).
(Link to PDF)
- [259]
- A. Rao, P. Plank, A. Wild, and
W. Maass.
A long short-term memory for AI
applications in spike-based neuromorphic hardware.
Nature Machine Intelligence, 4:467-479, 2022.
(PDF).
(Link to PDF)
- [258]
- F. Zenke, S. M. Bohté, C. Clopath,
I. M. Comșa, J. Göltz, W. Maass, T. Masquelier, R. Naud, E. O.
Neftci, M. A. Petrovici, F. Scherr, and D. F. M. Goodman.
Visualizing a joint future of neuroscience
and neuromorphic engineering.
Neuron, 109(4):571-575, 2021.
(PDF).
(Link to PDF)
- [257]
- Franz Scherr and W. Maass.
Learning-to-learn for neuromorphic hardware.
Neuromorphic Computing and Engineering, 2:022501, 2022.
In D. V. Christensen et al., 2022, A roadmap on neuromorphic computing and
engineering.
(PDF).
(Link to PDF)
- [256]
- F. Scherr, C. Stoeckl, and W. Maass.
One-shot learning with spiking neural
networks.
bioRxiv, 2020.
(PDF).
(Supplementary material PDF)
- [255]
- D. Salaj, A. Subramoney,
C. Kraisnikovic, G. Bellec, R. Legenstein, and W. Maass.
Spike-frequency adaptation supports network
computations on temporally dispersed information.
eLife, 10:e65459, 2021.
(PDF).
Supplementary material PDF (Link to eLife version, Link to
bioRxiv version PDF)
- [254]
- M. G. Müller, C. H.
Papadimitriou, W. Maass, and R. Legenstein.
A model for structured information
representation in neural networks of the brain.
eNeuro, 7(3), 2020.
(Journal link to PDF)
- [253]
- C. Papadimitriou, S. Vempala,
D. Mitropolsky, M. Collins, and W. Maass.
Brain computation by assemblies of
neurons.
PNAS, 117(25):14464-14472, 2020.
(Link to journal version PDF), (Draft on
bioRxiv PDF)
- [252]
- C. Stoeckl and W. Maass.
Optimized spiking neurons can classify
images with high accuracy through temporal coding with two spikes.
Nature Machine Intelligence, 3:230-238, 2021.
Draft on arXiv.
(PDF).
(Link to journal version PDF), (Link to
arXiv version PDF)
- [251]
- C. Stoeckl and W. Maass.
Recognizing images with at most one
spike per neuron.
arXiv:2001.01682v3, 2020.
(PDF).
(Link to arXiv version PDF)
- [250]
- A. Subramoney, F. Scherr, and
W. Maass.
Reservoirs learn to learn.
In Reservoir Computing: Theory, Physical Implementations, and
Applications, K. Nakajima and I. Fischer, editors. Springer, 2020.
(PDF).
Draft on arXiv:1909.07486v1
- [249]
- J. Kaiser, M. Hoff, A. Konle,
J. C. V. Tieck, D. Kappel, D. Reichard, A. Subramoney, R. Legenstein,
A. Roennau, W. Maass, and R. Dillmann.
Embodied synaptic plasticity with online
reinforcement learning.
Frontiers in Neurorobotics, 13:81, 2019.
(PDF).
(Journal link to PDF)
- [248]
- G. Bellec, F. Scherr, A. Subramoney,
E. Hajek, D. Salaj, R. Legenstein, and W. Maass.
A solution to the learning dilemma for
recurrent networks of spiking neurons.
Nature Communications, 11:3625, 2020.
(PDF).
Supplementary material PDF, Supplementary
movies PDF,
(Commentary by L. Manneschi and E. Vasilaki (2020), An alternative to
backpropagation through time, Nature Machine Intelligence, 2(3):155-156.
PDF)
- [247]
- T. Bohnstingl, F. Scherr,
C. Pehle, K. Meier, and W. Maass.
Neuromorphic hardware learns to
learn.
Frontiers in Neuroscience, 13:483, 2019.
(PDF).
- [246]
- G. Bellec, F. Scherr, E. Hajek,
D. Salaj, R. Legenstein, and W. Maass.
Biologically inspired alternatives to
backpropagation through time for learning in recurrent neural nets.
arxiv.org/abs/1901.09049, January 2019.
(PDF).
- [245]
- C. Liu, G. Bellec, B. Vogginger,
D. Kappel, J. Partzsch, F. Neumärker, S. Höppner, W. Maass, S. B. Furber,
R. Legenstein, and C. G. Mayr.
Memory-efficient deep learning on a SpiNNaker
2 prototype.
Frontiers in Neuroscience, 2018.
(PDF).
- [244]
- N. Anari, C. Daskalakis, W. Maass,
C. H. Papadimitriou, A. Saberi, and S. Vempala.
Smoothed analysis of discrete tensor
decomposition and assemblies of neurons.
32nd Conference on Neural Information Processing Systems (NIPS 2018),
Montreal, Canada, 2018.
(PDF).
- [243]
- G. Bellec, D. Salaj, A. Subramoney,
R. Legenstein, and W. Maass.
Long short-term memory and
learning-to-learn in networks of spiking neurons.
32nd Conference on Neural Information Processing Systems (NIPS 2018),
Montreal, Canada, 2018.
(PDF).
- [242]
- R. Legenstein, W. Maass, C. H.
Papadimitriou, and S. S. Vempala.
Long term memory and the densest
K-subgraph problem.
In Proc. of Innovations in Theoretical Computer Science (ITCS),
2018.
(PDF).
- [241]
- G. Bellec, D. Kappel, W. Maass, and
R. Legenstein.
Deep rewiring: training very sparse deep
networks.
International Conference on Learning Representations (ICLR), 2018.
(PDF).
- [240]
- C. Pokorny, M. J. Ison, A. Rao,
R. Legenstein, C. Papadimitriou, and W. Maass.
STDP forms associations between memory
traces in networks of spiking neurons.
Cerebral Cortex, 30(3):952-968, 2020.
(PDF).
(Supplementary material PDF), (Journal link to
PDF)
- [239]
- R. Legenstein, Z. Jonke,
S. Habenschuss, and W. Maass.
A probabilistic model for learning in
cortical microcircuit motifs with data-based divisive inhibition.
arXiv:1707.05182, 2017.
(PDF).
- [238]
- Z. Jonke, R. Legenstein,
S. Habenschuss, and W. Maass.
Feedback inhibition shapes emergent
computational properties of cortical microcircuit motifs.
Journal of Neuroscience, 37(35):8511-8523, 2017.
(PDF).
- [237]
- D. Kappel, R. Legenstein,
S. Habenschuss, M. Hsieh, and W. Maass.
A dynamic connectome supports the
emergence of stable computational function of neural circuits through
reward-based learning.
eNeuro, 2 April 2018.
(PDF).
- [236]
- M. A. Petrovici, S. Schmitt,
J. Klähn, D. Stöckel, A. Schroeder, G. Bellec, J. Bill, O. Breitwieser,
I. Bytschok, A. Grübl, M. Güttler, A. Hartel, S. Hartmann, D. Husmann,
K. Husmann, S. Jeltsch, V. Karasenko, M. Kleider, C. Koke, A. Kononov,
C. Mauch, P. Müller, J. Partzsch, T. Pfeil, S. Schiefer, S. Scholze,
A. Subramoney, V. Thanasoulis, B. Vogginger, R. Legenstein, W. Maass,
R. Schüffny, C. Mayr, J. Schemmel, and K. Meier.
Pattern representation and recognition
with accelerated analog neuromorphic systems.
arXiv:1703.06043, 2017.
(PDF).
- [235]
- S. Schmitt, J. Klähn, G. Bellec,
A. Grübl, M. Güttler, A. Hartel, S. Hartmann, D. Husmann, K. Husmann,
S. Jeltsch, V. Karasenko, M. Kleider, C. Koke, A. Kononov, C. Mauch,
E. Müller, P. Müller, J. Partzsch, M. A. Petrovici, S. Schiefer,
S. Scholze, V. Thanasoulis, B. Vogginger, R. Legenstein, W. Maass, C. Mayr,
R. Schüffny, J. Schemmel, and K. Meier.
Neuromorphic hardware in the loop:
Training a deep spiking network on the BrainScaleS Wafer-Scale
System.
In IEEE International Joint Conference on Neural Networks (IJCNN)
2017, pages 2227-2234, 2017.
(PDF).
- [234]
- W. Maass, C. H. Papadimitriou,
S. Vempala, and R. Legenstein.
Brain computation: A computer science
perspective.
Draft of an invited contribution to Springer Lecture Notes in Computer
Science, vol. 10000, 2017.
(PDF).
- [233]
- R. Legenstein, C. H.
Papadimitriou, S. Vempala, and W. Maass.
Assembly pointers for variable binding
in networks of spiking neurons.
arXiv preprint arXiv:1611.03698, 2016.
(PDF).
- [232]
- W. Maass.
Energy-efficient neural network chips approach human recognition capabilities.
PNAS, 113(40), doi:10.1073/pnas.1614109113, 2016.
(PDF).
- [231]
- Z. Yu, D. Kappel, R. Legenstein, S. Song,
F. Chen, and W. Maass.
CaMKII activation supports reward-based
neural network optimization through Hamiltonian sampling.
arXiv:1606.00157, 2016.
(PDF).
- [230]
- D. Pecevski and W. Maass.
Learning probabilistic inference
through STDP.
eNeuro, 2016.
(PDF).
- [229]
- Z. Jonke, S. Habenschuss, and W. Maass.
Solving constraint satisfaction problems
with networks of spiking neurons.
Front. Neurosci., 30 March 2016.
(Journal link to PDF)
- [228]
- W. Maass.
Searching for principles of brain
computation.
Current Opinion in Behavioral Sciences (Special Issue on Computational
Modelling), 11:81-92, 2016.
(PDF).
- [227]
- W. Maass.
To spike or not to spike: That is the
question.
Proceedings of the IEEE, 103(12):2219-2224, 2015.
(PDF).
- [226]
- D. Kappel, S. Habenschuss,
R. Legenstein, and W. Maass.
Synaptic sampling: A Bayesian approach
to neural network plasticity and rewiring.
In Advances in Neural Information Processing Systems 28,
C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors,
pages 370-378. Curran Associates, Inc., 2015.
(PDF).
- [225]
- D. Kappel, S. Habenschuss,
R. Legenstein, and W. Maass.
Network plasticity as Bayesian
inference.
PLOS Computational Biology, 11(11):e1004485, 2015.
(Journal link to PDF)
- [224]
- J. Bill, L. Buesing, S. Habenschuss,
B. Nessler, W. Maass, and R. Legenstein.
Distributed Bayesian computation and
self-organized learning in sheets of spiking neurons with local lateral
inhibition.
PLOS ONE, 10(8):e0134356, 2015.
(Journal link to PDF)
- [223]
- Z. Jonke, S. Habenschuss, and W. Maass.
A theoretical basis for efficient
computations with noisy spiking neurons.
arXiv.org, arXiv:1412.5862, 2014.
(PDF).
- [222]
- R. Legenstein and W. Maass.
Ensembles of spiking neurons with
noise support optimal probabilistic inference in a dynamically changing
environment.
PLOS Computational Biology, 10(10):e1003859, 2014.
(Journal link to PDF)
- [221]
- W. Maass.
Noise as a resource for computation and
learning in networks of spiking neurons.
Special Issue of the Proc. of the IEEE on "Engineering Intelligent
Electronic Systems based on Computational Neuroscience",
102(5):860-880, 2014.
(PDF).
- [220]
- D. Kappel, B. Nessler, and W. Maass.
STDP installs in winner-take-all
circuits an online approximation to hidden Markov model learning.
PLOS Computational Biology, 10(3):e1003511, 2014.
(Journal link to PDF)
- [219]
- S. Habenschuss, Z. Jonke, and
W. Maass.
Stochastic computations in cortical
microcircuit models.
PLOS Computational Biology, 9(11):e1003311, 2013.
(PDF).
(Additional technical information PDF)
- [218]
- S. Klampfl and W. Maass.
Emergence of dynamic memory traces in
cortical microcircuit models through STDP.
The Journal of Neuroscience, 33(28):11515-11529, 2013.
(PDF).
- [217]
- B. Nessler, M. Pfeiffer, L. Buesing,
and W. Maass.
Bayesian computation emerges in generic
cortical microcircuits through spike-timing-dependent plasticity.
PLOS Computational Biology, 9(4):e1003037, 2013.
(Journal link to PDF)
- [216]
- S. Habenschuss, H. Puhr, and
W. Maass.
Emergence of optimal decoding of
population codes through STDP.
Neural Computation, 25(6):1371-1407, 2013.
(PDF).
- [215]
- E. A. Rueckert, G. Neumann,
M. Toussaint, and W. Maass.
Learned graphical models for
probabilistic planning provide a new class of movement primitives.
Frontiers in Computational Neuroscience, 6:1-20, 2013.
doi:10.3389/fncom.2012.00097.
(PDF).
(Journal link to PDF)
- [214]
- G. M. Hoerzer, R. Legenstein, and
Wolfgang Maass.
Emergence of complex computational
structures from chaotic neural networks through reward-modulated Hebbian
learning.
Cerebral Cortex, 24:677-690, 2014.
(PDF).
(Supplementary material PDF)
- [213]
- D. Probst, W. Maass, H. Markram, and
M. O. Gewaltig.
Liquid computing in a simplified model of
cortical layer IV: Learning to balance a ball.
In Proceedings of the 22nd International Conference on Artificial Neural
Networks and Machine Learning -- ICANN 2012, Alessandro E.P. Villa,
Wlodzislaw Duch, Peter Erdi, Francesco Masulli, and Günther Palm, editors,
volume 7552 of Lecture Notes in Computer Science, pages
209-216. Springer, 2012.
(PDF).
(Journal link to PDF)
- [212]
- H. Hauser, A. J. Ijspeert, R. M.
Füchslin, R. Pfeifer, and W. Maass.
The role of feedback in morphological
computation with compliant bodies.
Biological Cybernetics, published 06 Sept 2012.
doi: 10.1007/s00422-012-0516-4.
(PDF).
(Journal link to PDF)
- [211]
- S. Klampfl, S. V. David, P. Yin,
S. A. Shamma, and W. Maass.
A quantitative analysis of information
about past and present stimuli encoded by spikes of A1 neurons.
Journal of Neurophysiology, 108:1366-1380, 2012.
(PDF).
(Journal link to abstract PDF)
- [210]
- M. Pfeiffer, M. Hartbauer, A. B.
Lang, W. Maass, and H. Römer.
Probing real sensory worlds of receivers
with unsupervised clustering.
PLoS ONE, 7(6):e37354, 2012. doi:10.1371/journal.pone.0037354.
(PDF).
(Journal link to PDF)
- [209]
- H. Hauser, A. J. Ijspeert, R. M.
Füchslin, R. Pfeifer, and W. Maass.
Towards a theoretical foundation for
morphological computation with compliant bodies.
Biological Cybernetics, 105(5-6):355-370, 2011.
(PDF).
(Journal link to PDF)
- [208]
- D. Pecevski, L. Büsing, and
W. Maass.
Probabilistic inference in general
graphical models through sampling in stochastic networks of spiking
neurons.
PLoS Computational Biology, 7(12):e1002294, 2011.
(Journal link to PDF)
- [207]
- L. Büsing, J. Bill, B. Nessler, and
W. Maass.
Neural dynamics as sampling: A model for
stochastic computation in recurrent networks of spiking neurons.
PLoS Computational Biology, 7(11):e1002211, 2011.
(Journal link to PDF)
- [206]
- R. Legenstein and W. Maass.
Branch-specific plasticity enables
self-organization of nonlinear computation in single neurons.
The Journal of Neuroscience, 31(30):10787-10802, 2011.
(PDF).
(Commentary by R. P. Costa and P. J. Sjöström in Frontiers in Synaptic
Neuroscience PDF)
- [205]
- H. Hauser, G. Neumann, A. J. Ijspeert,
and W. Maass.
Biologically inspired kinematic synergies
enable linear balance control of a humanoid robot.
Biological Cybernetics, 104(4-5):235-249, 2011.
(PDF).
(Journal link to PDF)
- [204]
- M. J. Rasch, K. Schuch, N. K.
Logothetis, and W. Maass.
Statistical comparison of spike responses
to natural stimuli in monkey area V1 with simulated responses of a detailed
laminar network model for a patch of V1.
Journal of Neurophysiology, 105:757-778, 2011.
(PDF).
(Commentary by W. S. Anderson and G. Kreiman in Current Biology 2011 PDF)
- [203]
- J. Bill, K. Schuch, D. Brüderle,
J. Schemmel, W. Maass, and K. Meier.
Compensating inhomogeneities of neuromorphic
VLSI devices via short-term synaptic plasticity.
Frontiers in Computational Neuroscience, 4:1-14, 2010.
doi:10.3389/fncom.2010.00129.
(PDF).
(Journal link to the PDF)
- [202]
- S. Klampfl and W. Maass.
A theoretical basis for emergent pattern
discrimination in neural systems through slow feature extraction.
Neural Computation, 22(12):2979-3035, 2010.
Epub 2010 Sep 21.
(PDF).
- [201]
- R. Legenstein, S. M. Chase, A. B.
Schwartz, and W. Maass.
A reward-modulated Hebbian learning
rule can explain experimentally observed network reorganization in a brain
control task.
The Journal of Neuroscience, 30(25):8400-8410, 2010.
(PDF).
- [200]
- D. Nikolic, S. Haeusler, W. Singer,
and W. Maass.
Distributed fading memory for stimulus
properties in the primary visual cortex.
PLoS Biology, 7(12):1-19, 2009.
(Journal link to PDF)
- [199]
- R. Legenstein and W. Maass.
An integrated learning rule for
branch strength potentiation and STDP.
39th Annual Conference of the Society for Neuroscience, Program 895.20,
Poster HH36, 2009.
- [198]
- S. Klampfl, S.V. David, P. Yin, S.A.
Shamma, and W. Maass.
Integration of stimulus history in
information conveyed by neurons in primary auditory cortex in response to
tone sequences.
39th Annual Conference of the Society for Neuroscience, Program 163.8,
Poster T6, 2009.
- [197]
- S. Liebe, G. Hoerzer, N.K. Logothetis,
W. Maass, and G. Rainer.
Long range coupling between V4 and PF
in theta band during visual short-term memory.
39th Annual Conference of the Society for Neuroscience, Program 652.20,
Poster Y31, 2009.
- [196]
- S. Haeusler, K. Schuch, and
W. Maass.
Motif distribution and computational
performance of two data-based cortical microcircuit templates.
38th Annual Conference of the Society for Neuroscience, Program
220.9, 2008.
- [195]
- L. Buesing and W. Maass.
A spiking neuron as information
bottleneck.
Neural Computation, 22:1961-1992, 2010.
(PDF).
- [194]
- M. Pfeiffer, B. Nessler,
R. Douglas, and W. Maass.
Reward-modulated Hebbian learning
of decision making.
Neural Computation, 22:1399-1444, 2010.
(PDF).
- [193]
- R. Legenstein, S. M. Chase, A. B.
Schwartz, and W. Maass.
Functional network reorganization in
motor cortex can be explained by reward-modulated Hebbian learning.
In Proc. of NIPS 2009: Advances in Neural Information Processing
Systems, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors,
volume 22, pages 1105-1113. MIT Press, 2010.
(PDF).
- [192]
- S. Klampfl and W. Maass.
Replacing supervised classification
learning by Slow Feature Analysis in spiking neural networks.
In Proc. of NIPS 2009: Advances in Neural Information Processing
Systems, volume 22, pages 988-996. MIT Press, 2010.
(PDF).
- [191]
- B. Nessler, M. Pfeiffer, and
W. Maass.
STDP enables spiking neurons to detect
hidden causes of their inputs.
In Proc. of NIPS 2009: Advances in Neural Information Processing
Systems, volume 22, pages 1357-1365. MIT Press, 2010.
(PDF).
- [190]
- R. Legenstein, S. M. Chase, A. B.
Schwartz, and W. Maass.
A model for learning effects in motor
cortex that may facilitate the brain control of neuroprosthetic devices.
38th Annual Conference of the Society for Neuroscience, Program
517.6, 2008.
- [189]
- W. Maass.
Liquid state machines: Motivation, theory, and
applications.
In Computability in Context: Computation and Logic in the Real
World, B. Cooper and A. Sorbi, editors, pages 275-296. Imperial
College Press, 2010.
(PDF).
- [188]
- G. Neumann, W. Maass, and J. Peters.
Learning complex motions by sequencing
simpler motion templates.
In Proc. of the 26th Int. Conf. on Machine Learning (ICML 2009),
Montreal, 2009.
(PDF).
- [187]
- A. Steimer, W. Maass, and R. Douglas.
Belief-propagation in networks of spiking
neurons.
Neural Computation, 21:2502-2523, 2009.
(PDF).
- [186]
- D. Buonomano and W. Maass.
State-dependent computations:
Spatiotemporal processing in cortical networks.
Nature Reviews Neuroscience, 10(2):113-125, 2009.
(PDF).
- [185]
- S. Haeusler, K. Schuch, and
W. Maass.
Motif distribution, dynamical
properties, and computational performance of two data-based cortical
microcircuit templates.
J. of Physiology (Paris), 103(1-2):73-87, 2009.
(PDF).
- [184]
- B. Nessler, M. Pfeiffer, and
W. Maass.
Hebbian learning of Bayes optimal
decisions.
In Proc. of NIPS 2008: Advances in Neural Information Processing
Systems, volume 21. MIT Press, 2009.
(PDF).
- [183]
- R. Legenstein, D. Pecevski, and
W. Maass.
A learning theory for
reward-modulated spike-timing-dependent plasticity with application to
biofeedback.
PLoS Computational Biology, 4(10):e1000180, 2008.
(Journal link to PDF)
- [182]
- L. Buesing and W. Maass.
Simplified rules and theoretical
analysis for information bottleneck optimization and PCA with spiking
neurons.
In Proc. of NIPS 2007, Advances in Neural Information Processing
Systems, volume 20. MIT Press, 2008.
(PDF).
- [181]
- R. Legenstein, D. Pecevski, and
W. Maass.
Theoretical analysis of learning with
reward-modulated spike-timing-dependent plasticity.
In Proc. of NIPS 2007, Advances in Neural Information Processing
Systems, volume 20, pages 881-888. MIT Press, 2008.
(PDF).
- [180]
- G. Neumann, M. Pfeiffer, and
W. Maass.
Efficient continuous-time reinforcement
learning with adaptive state graphs.
In Proceedings of the 18th European Conference on Machine Learning (ECML)
and the 11th European Conference on Principles and Practice of Knowledge
Discovery in Databases (PKDD) 2007, Warsaw (Poland). Springer
(Berlin), 2007.
(PDF).
- [179]
- S. Klampfl, R. Legenstein, and
W. Maass.
Spiking neurons can learn to solve
information bottleneck problems and extract independent components.
Neural Computation, 21(4):911-959, 2009.
(PDF).
- [178]
- W. Maass.
Liquid computing.
In Proceedings of the Conference CiE'07: COMPUTABILITY IN EUROPE 2007,
Siena (Italy), Lecture Notes in Computer Science, pages 507-516.
Springer (Berlin), 2007.
(PDF).
- [177]
- S. Haeusler, W. Singer, W. Maass,
and D. Nikolic.
Superposition of information in large
ensembles of neurons in primary visual cortex.
37th Annual Conference of the Society for Neuroscience, Program 176.2,
Poster II23, 2007.
- [176]
- D. Sussillo, T. Toyoizumi, and
W. Maass.
Self-tuning of neural circuits through
short-term synaptic plasticity.
Journal of Neurophysiology, 97:4079-4095, 2007.
(PDF).
(Supplementary material PDF)
- [175]
- H. Hauser, G. Neumann, A. J. Ijspeert,
and W. Maass.
Biologically inspired kinematic synergies
provide a new paradigm for balance control of humanoid robots.
In Proceedings of the IEEE-RAS 7th International Conference on
Humanoid Robots (Humanoids 2007), 2007.
Best Paper Award.
(PDF).
- [174]
- H. Jaeger, W. Maass, and J. Principe.
Special issue on echo state networks and liquid state machines.
Neural Networks, 20(3):287-289, 2007.
(PDF).
- [173]
- M. J. Rasch, A. Gretton, Y. Murayama,
W. Maass, and N. K. Logothetis.
Inferring spike trains from local field
potentials.
Journal of Neurophysiology, 99:1461-1476, 2008.
(PDF).
- [172]
- S. Klampfl, R. Legenstein, and
W. Maass.
Information bottleneck optimization and
independent component extraction with spiking neurons.
In Proc. of NIPS 2006, Advances in Neural Information Processing
Systems, volume 19, pages 713-720. MIT Press, 2007.
(PDF).
- [171]
- D. Nikolic, S. Haeusler,
W. Singer, and W. Maass.
Temporal dynamics of information content
carried by neurons in the primary visual cortex.
In Proc. of NIPS 2006, Advances in Neural Information Processing
Systems, volume 19, pages 1041-1048. MIT Press, 2007.
(PDF).
- [170]
- R. Legenstein and W. Maass.
On the classification capability of
sign-constrained perceptrons.
Neural Computation, 20(1):288-309, 2008.
(PDF).
- [169]
- W. Maass.
Book review of
"Imitation of life: how biology is inspiring computing" by Nancy
Forbes.
Pattern Analysis and Applications, 8(4):390-391, 2006.
Springer (London).
(PDF).
- [168]
- W. Maass, P. Joshi, and E. D. Sontag.
Computational aspects of feedback in
neural circuits.
PLoS Computational Biology, 3(1):e165, 2007.
(Journal link to PDF)
- [167]
- K. Uchizawa, R. Douglas, and
W. Maass.
Energy complexity and entropy of
threshold circuits.
In Proceedings of the 33rd International Colloquium on Automata,
Languages and Programming, ICALP (1) 2006, Venice, Italy, July 10-14, 2006,
Part I, M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, editors,
volume 4051 of Lecture Notes in Computer Science, pages
631-642. Springer, 2006.
(PDF).
- [166]
- R. Legenstein and W. Maass.
Edge of chaos and prediction of
computational performance for neural circuit models.
Neural Networks, 20(3):323-334, 2007.
(PDF).
- [165]
- R. Legenstein and W. Maass.
What makes a dynamical system
computationally powerful?
In New Directions in Statistical Signal Processing: From Systems to
Brains, S. Haykin, J. C. Principe, T.J. Sejnowski, and J.G. McWhirter,
editors, pages 127-154. MIT Press, 2007.
(PDF).
- [164]
- W. Maass, P. Joshi, and E. D. Sontag.
Principles of real-time computing with
feedback applied to cortical microcircuit models.
In Advances in Neural Information Processing Systems, Y. Weiss,
B. Schoelkopf, and J. Platt, editors, volume 18, pages 835-842. MIT Press,
2006.
(PDF).
- [163]
- K. Uchizawa, R. Douglas, and
W. Maass.
On the computational power of threshold
circuits with sparse activity.
Neural Computation, 18(12):2994-3008, 2006.
(PDF).
- [162]
- S. Haeusler and W. Maass.
A
statistical analysis of information processing properties of lamina-specific
cortical microcircuit models.
Cerebral Cortex, 17(1):149-162, 2007.
(PDF).
- [161]
- R. Legenstein and W. Maass.
A criterion for the convergence of
learning with spike timing dependent plasticity.
In Advances in Neural Information Processing Systems, Y. Weiss,
B. Schoelkopf, and J. Platt, editors, volume 18, pages 763-770. MIT Press,
2006.
(PDF).
- [160]
- W. Maass, R. Legenstein, and
N. Bertschinger.
Methods for estimating the computational
power and generalization capability of neural microcircuits.
In Advances in Neural Information Processing Systems, L. K. Saul,
Y. Weiss, and L. Bottou, editors, volume 17, pages 865-872. MIT Press, 2005.
(PDF).
- [159]
- Y. Fregnac, M. Blatow, J.-P.
Changeux, J. de Felipe, A. Lansner, W. Maass, D. A. McCormick, C. M. Michel,
H. Monyer, E. Szathmary, and R. Yuste.
UPs and DOWNs in
cortical computation.
In The Interface between Neurons and Global Brain Function,
S. Grillner and A. M. Graybiel, editors, Dahlem Workshop Report 93, pages
393-433. MIT Press, 2006.
(PDF).
- [158]
- P. Joshi and W. Maass.
Movement generation with circuits of
spiking neurons.
Neural Computation, 17(8):1715-1738, 2005.
(PDF).
- [157]
- W. Maass and H. Markram.
Theory of the
computational function of microcircuit dynamics.
In The Interface between Neurons and Global Brain Function,
S. Grillner and A. M. Graybiel, editors, Dahlem Workshop Report 93, pages
371-390. MIT Press, 2006.
(PDF).
- [156]
- A. Kaske and W. Maass.
A model for the interaction of
oscillations and pattern generation with real-time computing in generic
neural microcircuit models.
Neural Networks, 19(5):600-609, 2006.
(PDF).
- [155]
- O. Melamed, W. Gerstner, W. Maass,
M. Tsodyks, and H. Markram.
Coding and learning of behavioral
sequences.
Trends in Neurosciences, 27(1):11-14, 2004.
(PDF).
- [154]
- R. Legenstein, C. Naeger, and
W. Maass.
What can a neuron learn with
spike-timing-dependent plasticity?
Neural Computation, 17(11):2337-2382, 2005.
(PDF).
- [153]
- T. Natschlaeger and W. Maass.
Dynamics of information and
emergent computation in generic neural microcircuit models.
Neural Networks, 18(10):1301-1308, 2005.
(PDF).
- [151]
- P. Joshi and W. Maass.
Movement generation and control with
generic neural microcircuits.
In Biologically Inspired Approaches to Advanced Information Technology.
First International Workshop, BioADIT 2004, Lausanne, Switzerland, January
2004, Revised Selected Papers, A. J. Ijspeert, M. Murata, and
N. Wakamiya, editors, volume 3141 of Lecture Notes in Computer
Science, pages 258-273. Springer Verlag, 2004.
(PDF).
- [150]
- T. Natschlaeger and W. Maass.
Information dynamics and emergent
computation in recurrent circuits of spiking neurons.
In Proc. of NIPS 2003, Advances in Neural Information Processing
Systems, S. Thrun, L. Saul, and B. Schoelkopf, editors, volume 16,
pages 1255-1262, Cambridge, 2004. MIT Press.
(PDF).
- [149]
- W. Maass, T. Natschlaeger, and
H. Markram.
Computational models for generic cortical
microcircuits.
In Computational Neuroscience: A Comprehensive Approach, J. Feng,
editor, chapter 18, pages 575-605. Chapman & Hall/CRC, Boca Raton, 2004.
(PDF).
- [148]
- W. Maass, T. Natschlaeger, and
H. Markram.
Fading memory and kernel properties of
generic cortical microcircuit models.
Journal of Physiology -- Paris, 98(4-6):315-330, 2004.
(PDF).
- [147]
- W. Maass, T. Natschlaeger, and
H. Markram.
A model for real-time computation in
generic neural microcircuits.
In Proc. of NIPS 2002, Advances in Neural Information Processing
Systems, S. Becker, S. Thrun, and K. Obermayer, editors, volume 15,
pages 229-236. MIT Press, 2003.
(PDF).
- [146]
- W. Maass, R. Legenstein, and
H. Markram.
A new approach towards vision suggested by
biologically realistic neural microcircuit models.
In Biologically Motivated Computer Vision. Proc. of the Second
International Workshop, BMCV 2002, Tuebingen, Germany, November 22-24,
2002, H. H. Buelthoff, S. W. Lee, T. A. Poggio, and C. Wallraven,
editors, volume 2525 of Lecture Notes in Computer Science, pages
282-293. Springer (Berlin), 2002.
(PDF).
- [145]
- W. Maass.
On the computational power of neural microcircuit models: Pointers to the
literature.
In Proc. of the
International Conference on Artificial Neural Networks -- ICANN
2002, José R. Dorronsoro, editor, volume 2415 of Lecture
Notes in Computer Science, pages 254-256. Springer, 2002.
(PDF).
- [144]
- T. Natschlaeger, H. Markram, and
W. Maass.
Computer models and analysis tools
for neural microcircuits.
In Neuroscience Databases. A Practical Guide, R. Koetter, editor,
chapter 9, pages 121-136. Kluwer Academic Publishers (Boston), 2003.
(PDF).
- [143]
- T. Natschlaeger, W. Maass, and
H. Markram.
The "liquid computer": A novel
strategy for real-time computing on time series.
Special Issue on Foundations of Information Processing of
TELEMATIK, 8(1):39-43, 2002.
(PDF).
- [141]
- W. Maass.
Computing with spikes.
Special Issue on Foundations of Information Processing of
TELEMATIK, 8(1):32-36, 2002.
(PDF).
- [140]
- R. Legenstein, H. Markram, and
W. Maass.
Input prediction and autonomous
movement analysis in recurrent circuits of spiking neurons.
Reviews in the Neurosciences (Special Issue on Neuroinformatics of Neural
and Artificial Computation), 14(1-2):5-19, 2003.
(PDF).
- [139]
- Peter L. Bartlett and W. Maass.
Vapnik-Chervonenkis dimension of neural nets.
In The Handbook of Brain Theory and Neural Networks, M. A. Arbib,
editor, pages 1188-1192. MIT Press (Cambridge), 2nd edition, 2003.
(PDF).
- [138]
- W. Maass and H. Markram.
Temporal integration in recurrent microcircuits.
In The Handbook of Brain
Theory and Neural Networks, M. A. Arbib, editor, pages 1159-1163.
MIT Press (Cambridge), 2nd edition, 2003.
(PDF).
- [137]
- S. Haeusler, H. Markram, and
W. Maass.
Perspectives of the high-dimensional
dynamics of neural microcircuits from the point of view of low-dimensional
readouts.
Complexity (Special Issue on Complex Adaptive Systems),
8(4):39-50, 2003.
(PDF).
- [136]
- T. Natschlaeger and W. Maass.
Spiking neurons and the induction
of finite state machines.
Theoretical Computer Science: Special Issue on Natural Computing,
287:251-265, 2002.
(PDF).
- [135]
- W. Maass and H. Markram.
On the computational power of circuits
of spiking neurons.
Journal of Computer and System Sciences, 69(4):593-616, 2004.
(PDF).
- [134]
- R. A. Legenstein and W. Maass.
Optimizing the layout of a balanced
tree.
Technical Report, 2001.
(PDF).
- [133]
- R. A. Legenstein and W. Maass.
Neural circuits for pattern
recognition with small total wire length.
Theoretical Computer Science, 287:239-249, 2002.
(PDF).
- [132]
- R. A. Legenstein and W. Maass.
Wire length as a circuit complexity
measure.
Journal of Computer and System Sciences, 70:53-72, 2005.
(PDF).
- [131]
- G. Steinbauer, R. Koholka, and
W. Maass.
A very short story about autonomous
robots.
Special Issue on Foundations of Information Processing of
TELEMATIK, 8(1):26-29, 2002.
(PDF).
- [130]
- W. Maass, T. Natschlaeger, and
H. Markram.
Real-time computing without stable states:
A new framework for neural computation based on perturbations.
Neural Computation, 14(11):2531-2560, 2002.
(PDF).
- [129a]
- W. Maass.
wetware (English
version).
In TAKEOVER: Who is Doing the Art of Tomorrow (Ars
Electronica 2001), pages 148-152. Springer, 2001.
(PDF).
- [129b]
- W. Maass.
wetware (German
version).
In TAKEOVER: Who is Doing the Art of Tomorrow (Ars
Electronica 2001), pages 153-157. Springer, 2001.
(PDF).
- [128]
- W. Maass, G. Steinbauer, and
R. Koholka.
Autonomous fast learning in a mobile
robot.
In Sensor Based Intelligent Robots. International Workshop, Dagstuhl
Castle, Germany, October 15-25, 2000, Selected Revised Papers, G. D.
Hager, H. I. Christensen, H. Bunke, and R. Klein, editors, volume 2238 of
Lecture Notes in Computer Science, pages 345-356, 2002.
(PDF).
- [127]
- P. Auer, H. Burgsteiner, and W. Maass.
Reducing communication for distributed
learning in neural networks.
In Proc. of the
International Conference on Artificial Neural Networks -- ICANN 2002,
José R. Dorronsoro, editor, volume 2415 of Lecture Notes in
Computer Science, pages 123-128. Springer, 2002.
(PostScript).
(PDF).
- [126]
- P. Auer, H. Burgsteiner, and W. Maass.
A learning rule for very simple universal
approximators consisting of a single layer of perceptrons.
Neural Networks, 21(5):786-795, 2008.
(PDF).
- [125]
- T. Natschlaeger and W. Maass.
Computing the optimally fitted
spike train for a synapse.
Neural Computation, 13(11):2477-2494, 2001.
(PostScript).
(PDF).
- [124]
- T. Natschlaeger, W. Maass, and
A. Zador.
Efficient temporal processing with
biologically realistic dynamic synapses.
Network: Computation in Neural Systems, 12:75-87, 2001.
(PostScript).
(PDF).
- [123a]
- W. Maass.
Neural computation: a
research topic for theoretical computer science? Some thoughts and
pointers.
In Current Trends in Theoretical Computer Science, Entering the 21st
Century, Rozenberg G., Salomaa A., and Paun G., editors, pages
680-690. World Scientific Publishing, 2001.
(PostScript).
(PDF).
- [123b]
- W. Maass.
Neural computation: a research topic for theoretical computer science? Some
thoughts and pointers.
In Bulletin of the European Association for Theoretical Computer Science
(EATCS), volume 72, pages 149-158, 2000.
- [122]
- R. A. Legenstein and W. Maass.
Foundations for a circuit complexity
theory of sensory processing.
In Proc. of NIPS 2000, Advances in Neural Information Processing
Systems, T. K. Leen, T. G. Dietterich, and V. Tresp, editors,
volume 13, pages 259-265, Cambridge, 2001. MIT Press.
(PDF).
- [121]
- T. Natschlaeger and W. Maass.
Finding the key to a synapse.
In Advances in Neural Information Processing Systems (NIPS
'2000), Todd K. Leen, Thomas G. Dietterich, and Volker Tresp,
editors, volume 13, pages 138-144, Cambridge, 2001. MIT Press.
(PostScript).
(PDF).
The poster presented at NIPS is available as a PDF file.
- [120]
- W. Maass, A. Pinz, R. Braunstingl,
G. Wiesspeiner, T. Natschlaeger, O. Friedl, and H. Burgsteiner.
Konstruktion von lernfaehigen mobilen Robotern im Studentenwettbewerb
``Robotik 2000'' an der Technischen Universitaet Graz (Construction of
learning-capable mobile robots in the student competition ``Robotik 2000''
at Graz University of Technology).
Telematik, pages 20-24, 2000.
(PostScript).
(PDF).
- [119]
- W. Maass and H. Markram.
Synapses as dynamic memory buffers.
Neural Networks, 15:155-161, 2002.
(PostScript).
(PDF).
- [118]
- W. Maass.
Spike trains -- im Rhythmus neuronaler Zellen (Spike trains -- in the
rhythm of neuronal cells).
In Katalog der steirischen Landesausstellung gr2000az, R. Kriesche
and H. Konrad, editors, pages 36-42. Springer Verlag, 2000.
- [117]
- W. Maass.
Lernende Maschinen (Learning machines).
In Katalog der steirischen Landesausstellung gr2000az, R. Kriesche
and H. Konrad, editors, pages 50-56. Springer Verlag, 2000.
- [116]
- W. Maass.
Neural computation with winner-take-all as the only nonlinear operation.
In Advances in Neural Information Processing Systems, Sara A. Solla,
Todd K. Leen, and Klaus-Robert Mueller, editors, volume 12, pages 293-299.
MIT Press (Cambridge), 2000.
(PostScript).
(PDF).
- [115]
- T. Natschlaeger and W. Maass.
Fast analog computation in networks
of spiking neurons using unreliable synapses.
In ESANN'99 Proceedings of the European Symposium on Artificial Neural
Networks, pages 417-422, Bruges, Belgium, 1999.
(PostScript).
(PDF).
- [114]
- W. Maass.
Computation with spiking neurons.
In The Handbook of Brain
Theory and Neural Networks, M. A. Arbib, editor, pages 1080-1083.
MIT Press (Cambridge), 2nd edition, 2003.
(PostScript).
(PDF).
- [113]
- W. Maass.
On the computational power of winner-take-all.
Neural Computation, 12(11):2519-2535, 2000.
(PostScript).
(PDF).
- [112]
- W. Maass and T. Natschlaeger.
Emulation of Hopfield networks
with spiking neurons in temporal coding.
In Computational Neuroscience: Trends in Research, J. M. Bower,
editor, pages 221-226. Plenum Press, 1998.
(PostScript).
(PDF).
- [111]
- T. Natschlaeger, W. Maass, E. D.
Sontag, and A. Zador.
Processing of time series by neural
circuits with biologically realistic synaptic dynamics.
In Advances in Neural Information Processing Systems 2000 (NIPS
'2000), Todd K. Leen, Thomas G. Dietterich, and Volker Tresp,
editors, volume 13, pages 145-151, Cambridge, 2001. MIT Press.
(PostScript).
(PDF).
The poster presented at NIPS is available as a PDF file.
- [110]
- W. Maass.
Paradigms for computing with spiking neurons.
In Models of Neural Networks. Early Vision and Attention, J. L.
van Hemmen, J. D. Cowan, and E. Domany, editors, volume 4, chapter 9, pages
373-402. Springer (New York), 2002.
(PostScript).
(PDF).
- [109]
- W. Maass and E. D. Sontag.
A precise characterization of the class of languages recognized by neural nets
under Gaussian and other common noise distributions.
In Advances in Neural Information Processing Systems, M. S.
Kearns, S. S. Solla, and D. A. Cohn, editors, volume 11, pages 281-287. MIT
Press (Cambridge), 1999.
(PostScript).
(PDF).
- [108]
- W. Maass.
Das menschliche Gehirn -- nur ein Rechner? (The human brain -- just a
computer?).
In Zur Kunst des Formalen Denkens, R. E. Burkard, W. Maass, and
P. Weibel, editors, pages 209-233. Passagen Verlag (Wien), 2000.
(PostScript).
(PDF).
- [107]
- W. Maass and E. D. Sontag.
Neural systems as nonlinear filters.
Neural Computation, 12(8):1743-1772, 2000.
(PostScript).
(PDF).
- [106]
- P. Auer and W. Maass.
Introduction to the special issue on computational learning theory.
Algorithmica, 22(1/2):1-2, 1998.
(PDF).
- [105]
- W. Maass.
Spiking neurons.
In Proceedings of the ICSC/IFAC Symposium on Neural Computation 1998
(NC'98), pages 16-20. ICSC Academic Press (Alberta), 1998.
Invited talk.
- [104]
- W. Maass.
Models for fast analog computation with spiking neurons.
In Proc. of the International Conference on Neural Information Processing
1998 (ICONIP'98) in Kitakyushu, Japan, pages 187-188. IOS Press
(Amsterdam), 1998.
Invited talk at the special session on ``Dynamic Brain''.
- [103]
- W. Maass.
On the role of time and space in neural
computation.
In Proc. of the Federated Conference of CSL'98 and MFCS'98, Mathematical
Foundations of Computer Science 1998, volume 1450 of Lecture
Notes in Computer Science, pages 72-83. Springer (Berlin), 1998.
Invited talk.
(PostScript).
(PDF).
- [102]
- W. Maass and T. Natschlaeger.
A model for fast analog computation
based on unreliable synapses.
Neural Computation, 12(7):1679-1704, 2000.
(PostScript).
(PDF).
- [101]
- W. Maass and A. Zador.
Computing and learning with dynamic synapses.
In Pulsed Neural Networks, W. Maass and C. Bishop, editors, pages
321-336. MIT Press (Cambridge), 1998.
(PostScript).
(PDF).
- [100]
- W. Maass.
Computing with spiking neurons.
In Pulsed Neural Networks, W. Maass and C. M. Bishop, editors,
pages 55-85. MIT Press (Cambridge), 1999.
(PostScript).
(PDF).
- [99]
- W. Maass and T. Natschlaeger.
Associative memory with networks of
spiking neurons in temporal coding.
In Neuromorphic Systems: Engineering Silicon from Neurobiology,
L. S. Smith and A. Hamilton, editors, pages 21-32. World Scientific, 1998.
(PostScript).
(PDF).
- [98]
- W. Maass and B. Ruf.
On computation with pulses.
Information and Computation, 148:202-218, 1999.
(PostScript).
(PDF).
- [97a]
- W. Maass.
On the relevance of time in neural computation
and learning.
Theoretical Computer Science, 261:157-178, 2001.
(PDF).
- [97b]
- W. Maass.
On the relevance of time in neural computation
and learning.
In Proc. of the 8th International Conference on Algorithmic Learning
Theory in Sendai (Japan), M. Li and A. Maruoka, editors, volume 1316
of Lecture Notes in Computer Science, pages 364-384. Springer
(Berlin), 1997.
(PostScript).
(PDF).
- [96]
- W. Maass and M. Schmitt.
On the complexity of learning for
spiking neurons with temporal coding.
Information and Computation, 153:26-46, 1999.
(PostScript).
(PDF).
- [95]
- W. Maass and E. Sontag.
Analog neural nets with Gaussian or
other common noise distributions cannot recognize arbitrary regular
languages.
Neural Computation, 11:771-782, 1999.
(PostScript).
(PDF).
- [94a]
- W. Maass and A. M. Zador.
Dynamic stochastic synapses as
computational units.
Neural Computation, 11(4):903-917, 1999.
(PostScript).
(PDF).
- [94b]
- W. Maass and A. M. Zador.
Dynamic stochastic synapses as
computational units.
In Advances in Neural Information Processing Systems, volume 10, pages
194-200. MIT Press (Cambridge), 1998.
(PostScript).
(PDF).
- [93]
- W. Maass and T. Natschlaeger.
Networks of spiking neurons can
emulate arbitrary Hopfield nets in temporal coding.
Network: Computation in Neural Systems, 8(4):355-371, 1997.
(PostScript).
(PDF).
- [92]
- W. Maass and M. Schmitt.
On the complexity of learning for a spiking neuron.
In Proc. of the 10th Conference on Computational Learning Theory
1997, pages 54-61. ACM-Press (New York), 1997.
See also Electronic Proc. of the Fifth International Symposium on Artificial
Intelligence and Mathematics (http://rutcor.rutgers.edu/~amai).
(PDF).
- [91]
- W. Maass.
A simple model for neural computation with
firing rates and firing correlations.
Network: Computation in Neural Systems, 9(3):381-397, 1998.
(PDF).
- [90]
- W. Maass.
Noisy spiking neurons with temporal coding
have more computational power than sigmoidal neurons.
In Advances in Neural Information Processing Systems, M. Mozer,
M. I. Jordan, and T. Petsche, editors, volume 9, pages 211-217. MIT Press
(Cambridge), 1997.
(PostScript).
(PDF).
- [89]
- W. Maass.
Analog computations with temporal coding in networks of spiking neurons.
In Spatiotemporal Models in Biological and Artificial Systems,
F. L. Silva, editor, pages 97-104. IOS-Press, 1997.
- [88]
- W. Maass and P. Weibel.
Ist die Vertreibung der Vernunft reversibel? Ueberlegungen zu einem
Wissenschafts- und Medienzentrum (Is the expulsion of reason reversible?
Reflections on a science and media center).
In Jenseits von Kunst, P. Weibel, editor, pages 745-747.
Passagen Verlag, 1997.
(PostScript).
(PDF).
- [87a]
- W. Maass and P. Orponen.
On the effect of analog noise in
discrete-time analog computations.
Neural Computation, 10:1071-1095, 1998.
(PostScript).
(PDF).
- [87b]
- W. Maass and P. Orponen.
On the effect of analog noise in
discrete-time analog computations.
In Advances in Neural Information Processing Systems, M. Mozer,
M. I. Jordan, and T. Petsche, editors, volume 9, pages 218-224. MIT Press
(Cambridge), 1997.
(PostScript).
(PDF).
- [85a]
- W. Maass.
Networks of spiking neurons: the third
generation of neural network models.
Neural Networks, 10:1659-1671, 1997.
(PostScript).
(PDF).
- [85b]
- W. Maass.
Networks of spiking neurons: the third
generation of neural network models.
In Proc. of the 7th Australian Conference on Neural Networks 1996 in
Canberra, Australia, pages 1-10, 1996.
(PDF).
- [84]
- P. Auer, S. Kwek, W. Maass, and M. K.
Warmuth.
Learning of depth two neural nets with
constant fan-in at the hidden nodes.
In Proc. of the 9th Conference on Computational Learning Theory
1996, pages 333-343. ACM-Press (New York), 1996.
(PostScript).
(PDF).
- [83]
- W. Maass.
A model for fast analog computations with
noisy spiking neurons.
In Computational Neuroscience: Trends in Research, James Bower,
editor, pages 123-127, 1997.
(PostScript).
(PDF).
- [82]
- W. Maass.
Fast sigmoidal networks via spiking
neurons.
Neural Computation, 9:279-304, 1997.
(PostScript).
(PDF).
- [81]
- W. Maass.
Neuronale Netze und maschinelles Lernen am Institut fuer Grundlagen
der Informationsverarbeitung an der Technischen Universitaet Graz (Neural
networks and machine learning at the Institute for Foundations of
Information Processing at Graz University of Technology).
Telematik, 2:53-60, 1995.
(PDF).
- [80]
- W. Maass.
On the computational power of noisy spiking
neurons.
In Advances in Neural Information Processing Systems,
D. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, volume 8, pages
211-217. MIT Press (Cambridge), 1996.
(PostScript).
(PDF).
- [79]
- W. Maass and B. Ruf.
On the relevance of the shape of
postsynaptic potentials for the computational power of networks of spiking
neurons.
In Proc. of the International Conference on Artificial Neural Networks
ICANN, pages 515-520, Paris, 1995. EC2&Cie.
(PostScript).
(PDF).
- [78]
- W. Maass and G. Turan.
On learnability and predicate logic (extended abstract).
In Proc. of the 4th Bar-Ilan Symposium on Foundations of Artificial
Intelligence (BISFAI'95), pages 126-136, Jerusalem, 1995.
(PDF).
- [77]
- P. Auer, R. C. Holte, and W. Maass.
Theory and applications of agnostic
PAC-learning with small decision trees.
In Proc. of the 12th International Machine Learning Conference, Tahoe
City (USA), pages 21-29. Morgan Kaufmann (San Francisco), 1995.
(PostScript).
(PDF).
- [76]
- W. Maass.
Analog computations on networks of spiking neurons (extended abstract).
In Proc. of the 7th Italian Workshop on Neural Nets 1995, pages
99-104. World Scientific (Singapore), 1996.
(PDF).
- [75]
- W. Maass.
Lower bounds for the computational power of
networks of spiking neurons.
Neural Computation, 8(1):1-40, 1996.
(PostScript).
(PDF).
- [74]
- D. P. Dobkin, D. Gunopulos, and
W. Maass.
Computing the maximum bichromatic discrepancy, with applications to computer
graphics and machine learning.
Journal of Computer and System Sciences, 52(3):453-470, June
1996.
(PostScript).
(PDF).
- [73a]
- W. Maass and M. Warmuth.
Efficient learning with virtual threshold gates.
Information and Computation, 141(1):66-83, 1998.
(PostScript).
(PDF).
- [73b]
- W. Maass and M. Warmuth.
Efficient learning with virtual threshold gates.
In Proc. of the 12th International Machine Learning Conference, Tahoe
City, USA, pages 378-386. Morgan Kaufmann (San Francisco), 1995.
- [72]
- W. Maass.
On the computational complexity of networks of spiking neurons.
In Advances in Neural Information Processing Systems, G. Tesauro,
D. S. Touretzky, and T. K. Leen, editors, volume 7, pages 183-190. MIT Press
(Cambridge), 1995.
(PostScript).
(PDF).
- [71]
- W. Maass.
On the complexity of learning on neural nets.
In Computational Learning Theory: EuroColt'93, J. Shawe-Taylor and
M. Anthony, editors, pages 1-17. Oxford University Press (Oxford), 1994.
(PostScript).
(PDF).
- [70]
- W. Maass.
Efficient agnostic PAC-learning with simple hypotheses.
In Proc. of the 7th Annual ACM Conference on Computational Learning
Theory, pages 67-75, 1994.
(PostScript).
(PDF).
- [69]
- W. Maass.
Computing on analog neural nets with arbitrary real weights.
In Theoretical Advances in Neural Computation and Learning, V. P.
Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, pages 153-172. Kluwer
Academic Publishers (Boston), 1994.
(PostScript).
(PDF).
- [68]
- W. Maass.
Vapnik-Chervonenkis dimension of neural nets.
In The Handbook of Brain Theory and Neural Networks, M. A. Arbib,
editor, pages 1000-1003. MIT Press (Cambridge), 1995.
(PostScript).
(PDF).
- [67]
- W. Maass.
Perspectives of current research about the complexity of learning on neural
nets.
In Theoretical Advances in Neural Computation and Learning, V. P.
Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, pages 295-336. Kluwer
Academic Publishers (Boston), 1994.
(PostScript).
(PDF).
- [66a]
- W. Maass.
Neural nets with superlinear
VC-dimension.
Neural Computation, 6:877-884, 1994.
(PostScript).
(PDF).
- [66b]
- W. Maass.
Neural nets with superlinear VC-dimension.
In Proceedings of the International Conference on Artificial Neural
Networks 1994 (ICANN'94), pages 581-584. Springer (Berlin), 1994.
(PDF).
- [65a]
- W. Maass.
Agnostic PAC-learning of functions on analog neural nets.
Neural Computation, 7:1054-1078, 1995.
(PostScript).
(PDF).
- [65b]
- W. Maass.
Agnostic PAC-learning of functions on analog neural nets.
In Advances in Neural Information Processing Systems, volume 7,
pages 311-318, 1995.
(PostScript).
(PDF).
- [64a]
- P. Auer, P. M. Long, W. Maass, and
G. J. Woeginger.
On the complexity of function learning.
Machine Learning, 18:187-230, 1995.
Invited paper in a special issue of Machine Learning.
(PDF).
- [64b]
- P. Auer, P. M. Long, W. Maass, and G. J.
Woeginger.
On the complexity of function learning.
In Proceedings of the 5th Annual ACM Conference on Computational Learning
Theory, pages 392-401, 1993.
- [63]
- Z. Chen and W. Maass.
On-line learning of rectangles and unions of rectangles.
Machine Learning, 17:201-223, 1994.
Invited paper for a special issue of Machine Learning.
(PostScript).
(PDF).
- [62a]
- W. Maass.
Bounds for the computational power and learning complexity of analog neural
nets.
SIAM J. on Computing, 26(3):708-732, 1997.
(PostScript).
(PDF).
- [62b]
- W. Maass.
Bounds for the computational power and learning complexity of analog neural
nets.
In Proceedings of the 25th Annual ACM Symposium on Theory of
Computing, pages 335-344, 1993.
(PostScript).
(PDF).
- [61]
- W. Maass, G. Schnitger, E. Szemeredi,
and G. Turan.
Two tapes versus one for off-line Turing machines.
Computational Complexity, 3:392-401, 1993.
(PDF).
- [60]
- Z. Chen and W. Maass.
A solution of the credit assignment problem in the case of learning rectangles.
In Proceedings of the 3rd Int. Workshop on Analogical and Inductive
Inference, volume 642 of Lecture Notes in Artificial
Intelligence, pages 26-34. Springer, 1992.
- [59]
- Z. Chen and W. Maass.
On-line learning of rectangles.
In Proceedings of the 5th Annual ACM Workshop on Computational Learning
Theory, pages 16-28, 1992.
(PDF).
- [58a]
- W. Maass, G. Schnitger, and E. Sontag.
A comparison of the computational power of sigmoid and boolean threshold
circuits.
In Theoretical Advances in Neural Computation and Learning, V. P.
Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, pages 127-151. Kluwer
Academic Publishers (Boston), 1994.
(PDF).
- [58b]
- W. Maass, G. Schnitger, and E. Sontag.
On the computational power of sigmoid
versus boolean threshold circuits.
In Proc. of the 32nd Annual IEEE Symposium on Foundations of Computer
Science 1991, pages 767-776, 1991.
(PDF).
- [57]
- W. Maass.
On-line learning with an oblivious environment and the power of randomization.
In Proceedings of the 4th Annual ACM Workshop on Computational Learning
Theory, pages 167-175. Morgan Kaufmann (San Mateo), 1991.
(PDF).
- [56]
- W. Maass and G. Turan.
Algorithms and lower bounds for on-line learning of geometrical concepts.
Machine Learning, 14:251-269, 1994.
(PDF).
- [55a]
- W. J. Bultman and W. Maass.
Fast identification of geometric objects with membership queries.
Information and Computation, 118:48-64, 1995.
(PDF).
- [55b]
- W. J. Bultman and W. Maass.
Fast identification of geometric objects with membership queries.
In Proceedings of the 4th Annual ACM Workshop on Computational Learning
Theory, pages 337-353, 1991.
- [54]
- W. Maass and G. Turan.
Lower bound methods and separation results
for on-line learning models.
Machine Learning, 9:107-145, 1992.
Invited paper for a special issue of Machine Learning.
(PDF).
- [53]
- A. Gupta and W. Maass.
A method for the efficient design of Boltzmann machines for classification
problems.
In Advances in Neural Information Processing Systems, R. P.
Lippmann, J. E. Moody, and D. S. Touretzky, editors, volume 3, pages
825-831. Morgan Kaufmann, (San Mateo), 1991.
(PDF).
- [52]
- W. Maass and T. A. Slaman.
Splitting and density for the recursive sets of a fixed time complexity.
In Proceedings of a Workshop on Logic from Computer Science, Y. N.
Moschovakis, editor, pages 359-372. Springer (Berlin), 1991.
(PDF).
- [51]
- W. Maass and T. A. Slaman.
The complexity types of computable sets.
Journal of Computer and System Sciences, 44:168-192, 1992.
Invited paper for a special issue of the J. Comput. Syst. Sci.
(PDF).
- [50]
- W. Maass and T. A. Slaman.
On the relationship between the complexity, the degree, and the extension of a
computable set.
In Proceedings of the 1989 Recursion Theory Week Oberwolfach,
pages 297-322. Springer (Berlin), 1990.
(PDF).
- [49]
- W. Maass and G. Turan.
How fast can a threshold gate learn?
In Computational Learning Theory and Natural Learning System: Constraints
and Prospects, S. J. Hanson, G. A. Drastal, and R. L. Rivest, editors,
pages 381-414. MIT Press (Cambridge), 1994.
(PDF).
- [48]
- W. Maass and G. Turan.
On the complexity of learning from counterexamples and membership queries.
In Proceedings of the 31st Annual IEEE Symposium on Foundations of
Computer Science, pages 203-210, 1990.
(PDF).
- [47]
- A. Hajnal, W. Maass, P. Pudlak,
M. Szegedy, and G. Turan.
Threshold circuits of bounded depth.
J. Comput. System Sci., 46:129-154, 1993.
(PDF).
- [46]
- M. Dietzfelbinger and
W. Maass.
The complexity of matrix transposition on one-tape off-line Turing machines
with output tape.
Theoretical Computer Science, 108:271-290, 1993.
(PDF).
- [45]
- M. Dietzfelbinger, W. Maass,
and G. Schnitger.
The complexity of matrix transposition on one-tape off-line Turing machines.
Theoretical Computer Science, 82:113-129, 1991.
(PDF).
- [44]
- W. Maass and G. Turán.
On the complexity of learning from counterexamples (extended abstract).
In Proceedings of the 30th Annual IEEE Symposium on Foundations of
Computer Science, pages 262-267, 1989.
(PDF).
- [43]
- W. Maass and T. A. Slaman.
Extensional properties of sets of time bounded complexity (extended abstract).
In Proceedings of the 7th International Conference on Fundamentals of
Computation Theory, volume 380 of Lecture Notes in Computer
Science, pages 318-326. Springer (Berlin), 1989.
(PDF).
- [42]
- W. Maass and T. A. Slaman.
The complexity types of computable sets (extended abstract).
In Proceedings of the 4th Annual Conference on Structure in Complexity
Theory, pages 231-239. IEEE Computer Society Press (Washington),
1989.
(PDF).
- [41]
- W. Maass and T. A. Slaman.
Some problems and results in the theory of actually computable functions.
In Proceedings of the Logic Colloquium '88, Padova, Italy, Ferro,
Bonotto, Valentini, and Zanardo, editors, pages 79-89. Elsevier Science
Publishers (North-Holland), 1989.
(PDF).
- [40]
- W. Maass and K. Sutner.
Motion planning among time dependent obstacles.
Acta Informatica, 26:93-122, 1988.
(PDF).
- [39]
- M. Dietzfelbinger and
W. Maass.
The complexity of matrix transposition on one-tape off-line Turing machines
with output tape.
In Proceedings of the 15th International Colloquium on Automata,
Languages and Programming, volume 317 of Lecture Notes in
Computer Science, pages 188-200. Springer (Berlin), 1988.
(PDF).
- [38]
- A. Hajnal, W. Maass, and G. Turan.
On the communication complexity of graph properties.
In Proceedings of the 20th Annual ACM Symposium on Theory of
Computing, pages 186-191, 1988.
(PDF).
- [37]
- N. Alon and W. Maass.
Meanders and their applications in lower
bound arguments.
J. Comput. System Sci., 37:118-129, 1988.
Invited paper for a special issue of J. Comput. System Sci.
(PDF).
- [36]
- M. Dietzfelbinger and
W. Maass.
Lower bound arguments with ``inaccessible'' numbers.
Journal of Computer and System Sciences, 36:313-335, 1988.
(PDF).
- [35]
- W. Maass.
On the use of inaccessible numbers and order
indiscernibles in lower bound arguments for random access machines.
J. Symbolic Logic, 53:1098-1109, 1988.
(PDF).
- [34]
- A. Hajnal, W. Maass, P. Pudlak,
M. Szegedy, and G. Turan.
Threshold circuits of bounded depth.
In Proceedings of the 28th Annual IEEE Symposium on Foundations of
Computer Science, pages 99-110, 1987.
(PDF).
- [33]
- W. Maass, G. Schnitger, and
E. Szemeredi.
Two tapes are better than one for off-line Turing machines.
In Proceedings of the 19th Annual ACM Symposium on Theory of
Computing, pages 94-100, 1987.
(PDF).
- [32]
- D. Hochbaum and W. Maass.
Fast approximation algorithms for a nonconvex covering problem.
J. Algorithms, 8:305-323, 1987.
(PDF).
- [31]
- W. Maass and A. Schorr.
Speed-up of Turing machines with one work tape and a two-way input tape.
SIAM J. Comput., 16:195-202, 1987.
(PDF).
- [30]
- N. Alon and W. Maass.
Meanders, Ramsey's theorem and lower bounds for branching programs.
Proceedings of the 27th Annual IEEE Symposium on Foundations of Computer
Science, pages 410-417, 1986.
(PDF).
- [29]
- M. Dietzfelbinger and
W. Maass.
Two lower bound arguments with ``inaccessible'' numbers.
In Proceedings of the Structure in Complexity Theory Conference, Berkeley
1986, volume 223 of Lecture Notes in Computer Science,
pages 163-183. Springer (Berlin), 1986.
(PDF).
- [28]
- W. Maass and G. Schnitger.
An optimal lower bound for Turing machines with one work tape and two-way
input tape.
In Proceedings of the Structure in Complexity Theory Conference, Berkeley
1986, volume 223 of Lecture Notes in Computer Science,
pages 249-264. Springer (Berlin), 1986.
(PDF).
- [27]
- W. Maass.
On the complexity of nonconvex covering.
SIAM J. Computing, 15:453-467, 1986.
(PDF).
- [26]
- W. Maass.
Are recursion theoretic arguments useful in complexity theory?
In Proceedings of the International Conference on Logic, Methodology and
Philosophy of Science, Salzburg 1983, pages 141-158. North-Holland
(Amsterdam), 1986.
(PDF).
- [25]
- W. Maass.
Combinatorial lower bound arguments for
deterministic and nondeterministic Turing machines.
Transactions of the American Mathematical Society,
292(2):675-693, 1985.
(PDF).
- [24]
- M. Dietzfelbinger and
W. Maass.
Strong reducibilities in alpha- and beta-recursion theory.
In Proceedings of the 1984 Recursion Theory Week Oberwolfach,
Germany, volume 1141 of Lecture Notes in Mathematics,
pages 89-120. Springer (Berlin), 1985.
(PDF).
- [23]
- W. Maass.
Major subsets and automorphisms of recursively enumerable sets.
Proceedings of Symposia in Pure Mathematics, 42:21-32, 1985.
(PDF).
- [22]
- D. Hochbaum and W. Maass.
Approximation schemes for covering and packing problems in image processing and
VLSI.
J. Assoc. Comp. Mach., 32:130-136, 1985.
(PDF).
- [21]
- W. Maass.
Variations on promptly simple sets.
J. Symbolic Logic, 50:138-148, 1985.
(PDF).
- [20]
- W. Maass.
Quadratic lower bounds for deterministic and
nondeterministic one-tape Turing machines.
In Proceedings of 16th Annual ACM Symp. on Theory of Computing,
pages 401-408, 1984.
(PDF).
- [19]
- D. Hochbaum and W. Maass.
Approximation schemes for covering and packing problems in robotics and VLSI
(extended abstract).
In Proceedings of Symp. on Theoretical Aspects of Computer Science (Paris
1984), volume 166 of Lecture Notes in Computer Science,
pages 55-62. Springer (Berlin), 1984.
(PDF).
- [18]
- W. Maass.
On the orbits of hyperhypersimple sets.
J. Symbolic Logic, 49:51-62, 1984.
(PDF).
- [17]
- S. Homer and W. Maass.
Oracle dependent properties of the lattice
of NP-sets.
Theoretical Computer Science, 24:279-289, 1983.
(PDF).
- [16]
- W. Maass and M. Stob.
The intervals of the lattice of recursively enumerable sets determined by major
subsets.
Ann. of Pure and Applied Logic, 24:189-212, 1983.
(PDF).
- [15]
- W. Maass.
Characterization of recursively enumerable sets with supersets effectively
isomorphic to all recursively enumerable sets.
Trans. Amer. Math. Soc., 279:311-336, 1983.
(PDF).
- [14]
- W. Maass.
Recursively enumerable generic sets.
The Journal of Symbolic Logic, 47:809-823, 1982.
(PDF).
- [13]
- W. Maass.
Recursively invariant beta-recursion theory.
Ann. of Math. Logic, 21:27-73, 1981.
(PDF).
- [12]
- W. Maass.
A countable basis for sigma-one-two sets and recursion theory on aleph-one.
Proceedings Amer. Math. Soc., 82:267-270, 1981.
(PDF).
- [11]
- W. Maass, R. A. Shore, and M. Stob.
Splitting properties and jump classes.
Israel J. Math., 39:210-224, 1981.
(PDF).
- [10]
- W. Maass.
Recursively invariant beta-recursion theory -- a preliminary survey.
In Proceedings of the Conf. on Recursion Theory and Computational
Complexity, G. Lolli, editor, pages 229-236. Liguori editore
(Napoli), 1981.
(PDF).
- [9]
- W. Maass.
On alpha- and beta-recursively enumerable degrees.
Ann. of Math. Logic, 16:205-231, 1979.
(PDF).
- [8]
- W. Maass.
High alpha-recursively enumerable degrees.
In Generalized Recursion Theory II, J. E. Fenstad, R. O. Gandy, and
G. E. Sacks, editors, pages 239-269. North-Holland (Amsterdam), 1978.
(PDF).
- [7]
- W. Maass.
Contributions to alpha- and beta-recursion theory.
Habilitationsschrift (habilitation thesis), Ludwig-Maximilians-Universitaet Muenchen, 1978.
Minerva Publikation (Muenchen).
(PDF).
- [6]
- W. Maass.
Fine structure theory of the constructible universe in alpha- and
beta-recursion theory.
In Higher Set Theory, G. H. Mueller and D. Scott, editors, volume
669 of Lecture Notes in Mathematics, pages 339-359. Springer
(Berlin), 1978.
(PDF).
- [5]
- W. Maass.
The uniform regular set theorem in alpha-recursion theory.
J. Symbolic Logic, 43:270-279, 1978.
(PDF).
- [4]
- W. Maass.
Inadmissibility, tame r.e. sets and the admissible collapse.
Annals of Mathematical Logic, 13:149-170, 1978.
(PDF).
- [3]
- W. Maass.
On minimal pairs and minimal degrees in higher recursion theory.
Archive Math. Logik Grundlagen, 18:169-186, 1977.
(PDF).
- [2]
- W. Maass.
Eine Funktionalinterpretation der praedikativen Analysis (A functional
interpretation of predicative analysis).
Archive Math. Logik Grundlagen, 18:27-46, 1976.
(PDF).
- [1]
- W. Maass.
Church-Rosser-Theorem fuer Lambda-Kalkuele mit unendlich langen Termen
(Church-Rosser theorem for lambda calculi with infinitely long terms).
In Proof Theory Symposium Kiel 1974, J. Diller and G. H. Mueller,
editors, volume 500 of Lecture Notes in Mathematics, pages
257-263. Springer (Berlin), 1975.
(PDF).