Seminar Computational Intelligence A (708.111)

WS 2016

Institut für Grundlagen der Informationsverarbeitung (7080)

Lecturer:
O.Univ.-Prof. Dr. Wolfgang Maass

Office hours: by appointment (via e-mail)

E-mail: maass@igi.tugraz.at
Homepage: https://igi-web.tugraz.at/people/maass/


Assoc. Prof. Dr. Robert Legenstein

Office hours: by appointment (via e-mail)

E-mail: robert.legenstein@igi.tugraz.at
Homepage: www.igi.tugraz.at/legi/




Location: IGI-seminar room, Inffeldgasse 16b/I, 8010 Graz
Date: starting October 10th, 2016; Mondays, 16:15 - 18:00
Credits: A talk in this seminar can also be counted as formal credit for another seminar of the institute.

Content of the seminar: Towards an integration of deep learning and neuroscience

The paper

A. Marblestone, G. Wayne, and K. Kording (2016). Toward an Integration of Deep Learning and Neuroscience. Frontiers in Computational Neuroscience, 10. [PDF@Frontiers].

hypothesizes that biological neuronal systems may employ learning processes that share similarities with deep learning techniques. Over the course of this semester, we will discuss this paper and selected references therein.

Abstract: Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.



Papers:

Biological Implementation of Optimization:

(1) R. C. O’Reilly, D. Wyatte, and J. Rohrlich (2014). Learning through time in the thalamocortical loops. arXiv:1407.3432v1. https://arxiv.org/abs/1407.3432

Cost Functions for Unsupervised Learning / Prediction:

(2) W. Lotter, G. Kreiman, and D. Cox (2016). Unsupervised learning of visual structure using predictive generative networks. arXiv:1511.06380. http://arxiv.org/abs/1511.06380

Repurposing Reinforcement Learning for Diverse Internal Cost Functions:

(3) R. C. O’Reilly, T. E. Hazy, J. Mollick, P. Mackie, and S. Herd (2014). Goal-driven cognition in the brain: A computational framework. arXiv:1404.7591v1. https://arxiv.org/abs/1404.7591

(4) J. O. Rombouts, S. M. Bohte, and P. R. Roelfsema (2015). How attention can create synaptic tags for the learning of working memories in sequential tasks. PLOS Computational Biology, DOI: 10.1371/journal.pcbi.1004060. http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004060

Hierarchical Control:

(5) G. Wayne, and L. F. Abbott (2014). Hierarchical control using networks trained with higher-level forward models. Neural Computation, 26(10):2163-2193. DOI: 10.1162/NECO_a_00639. http://www.ncbi.nlm.nih.gov/pubmed/25058706

(6) T. J. Sejnowski, H. Poizner, G. Lynch, S. Gepshtein, and R. J. Greenspan (2014). Prospective Optimization. Proceedings of the IEEE, 102(5):799-811. DOI: 10.1109/JPROC.2014.2314297. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4201124/

Insights from Deep Learning:

(7) E. Jonas, and K. Kording (2016). Could a neuroscientist understand a microprocessor? bioRxiv, 055624. http://www.biorxiv.org/content/early/2016/05/26/055624.abstract

(8) I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton (2013). On the importance of initialization and momentum in deep learning. Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia, USA, 2013. JMLR: W&CP volume 28. http://www.cs.toronto.edu/~fritz/absps/momentum.pdf

(9) J. Yosinski, J. Clune, and Y. Bengio (2014). How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems 27: 3320-3328. http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-neural-networks

(10) C. Gülcehre, and Y. Bengio (2016). Knowledge matters: Importance of prior information for optimization. Journal of Machine Learning Research, 17(8), 1-32. www.jmlr.org/papers/volume17/gulchere16a/gulchere16a.pdf



Talks:

24.10.2016, 16:15-18:00
Maass, Legenstein: Introduction [PDF]

21.11.2016, 15:45-18:00
Absenger, Mulle: Goal-driven cognition in the brain: A computational framework. R. C. O’Reilly, T. E. Hazy, J. Mollick, P. Mackie, and S. Herd (2014) [PDF]

28.11.2016, 15:45-18:00
Marchetto, Raggam: Learning through time in the thalamocortical loops. R. C. O’Reilly, D. Wyatte, and J. Rohrlich (2014) [PDF]
Harb, Micorek: Prospective Optimization. T. J. Sejnowski, H. Poizner, G. Lynch, S. Gepshtein, and R. J. Greenspan (2014) [PDF]

12.12.2016, 15:45-18:00
Steger, Zöhrer: Unsupervised learning of visual structure using predictive generative networks. W. Lotter, G. Kreiman, and D. Cox (2016) [PDF]
Wohlhart, Müller: Could a neuroscientist understand a microprocessor? E. Jonas, and K. Kording (2016) [PDF]

09.01.2017, 15:45-18:00
Legenstein: A brief introduction into deep learning
Lindner, Narnhofer: Knowledge matters: Importance of prior information for optimization. C. Gülcehre, and Y. Bengio (2016) [PDF]

23.01.2017, 15:45-18:00
Topic, Eibl: On the importance of initialization and momentum in deep learning. I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton (2013) [PDF]
Fuchs, Ainetter: How transferable are features in deep neural networks? J. Yosinski, J. Clune, and Y. Bengio (2014) [PDF]