Seminar Computational Intelligence B (708.112)

SS 2014

Institut für Grundlagen der Informationsverarbeitung (708)

Lecturer:
O.Univ.-Prof. Dr. Wolfgang Maass

Office hours: by appointment (via e-mail)

E-mail: maass@igi.tugraz.at
Homepage: https://igi-web.tugraz.at/people/maass/


Assoc. Prof. Dr. Robert Legenstein

Office hours: by appointment (via e-mail)

E-mail: robert.legenstein@igi.tugraz.at
Homepage: www.igi.tugraz.at/legi/




Location: IGI seminar room, Inffeldgasse 16b/I, 8010 Graz
Date: starting March 10th, 2014; subsequent sessions: Monday, 15:00 - 18:00
  and Tuesday, 16:15 - 18:00  (TUGonline)

Content of the seminar:  Deep Learning

Recent advances in Deep Learning have brought about a shift in Machine Learning research. Due to novel training methods, Deep Neural Networks have become the most promising tool for the analysis of large data sets.
Major companies have recognized this development as well, and we are currently witnessing this mostly academic research discipline become a major player in commercial computer science.
For example, Google has hired Geoffrey Hinton, the inventor of deep belief networks and deep Boltzmann machines, and is acquiring an AI startup called DeepMind for more than 500 million dollars. Facebook has hired Yann LeCun, the inventor of convolutional neural networks (see http://deeplearning.net).

In this seminar, we will discuss recent publications on Deep Learning methods.

This seminar is designed for master's students in their second year, or in their first year if they take NN B at the same time (or have already taken one of the courses listed below).

The papers and talks will be on an introductory level and will be accessible to an audience with minimal technical background. One didactic goal
of the seminar is to provide experience in giving presentations, which can be practiced here in a relaxed setting.
In addition, students can explore research topics that might interest them for a master's thesis or project.

One of the courses NN A, NN B, ML A, or ML B suffices as background.



Talks:

Tuesday, 06.05.14, 16:15
  Christoph Feichtenhofer (slides)
    on: ImageNet Classification with Deep Convolutional Neural Networks
    (background on Conv. NNs: LeCun et al., 1998; Zeiler and Fergus, 2014)
  Michael Jia Yow Hsieh (slides)
    on: Structural plasticity

Monday, 26.05.14, 15:00
  Anna Saranti (slides)
    on: Variational Methods
  Stefan Imlauer, Konstantin Lassnig, Stefan Loigge (slides DBM part 1)
    on: An Efficient Learning Procedure for Deep Boltzmann Machines (part I)

Tuesday, 27.05.14, 16:15
  Stefan Imlauer, Konstantin Lassnig, Stefan Loigge (slides DBM part 2, slides DBM part 3)
    on: An Efficient Learning Procedure for Deep Boltzmann Machines (part II)

Tuesday, 10.06.14, 16:15
  Granit Luzhnica (slides)
    on: Multimodal Learning with Deep Boltzmann Machines
  Gerhard Neuhold (slides)
    on: Learning with hierarchical-deep models (NIPS version)

Monday, 16.06.14, 15:00
  Miran Levar (slides)
    on: Neuromorphic adaptations of restricted Boltzmann machines and deep belief networks
  David Steyrl (slides)
    on: Building fast Bayesian computing machines out of intentionally stochastic, digital parts
  Martin Seeber (slides)
    on: Stochastic computations in cortical microcircuit models (Stationary distributions of network states)


Options for Talks:

username: lehre
password: on request from robert.legenstein@igi.tugraz.at

  1. Neal, R. (1992).
    Connectionist learning of belief networks.
    Artificial Intelligence, 56:71-113, 1992.
    Remark: This paper introduces sigmoidal belief networks and relates them to Boltzmann machines.
    This talk should be the basis for a later talk on deep belief networks.
    Talks: 1
    For the talk, one may skip everything on noisy-or belief networks.
  2. G. E. Hinton, S. Osindero, and Y.-W. Teh (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554, 2006.
    The paper introduces Deep Belief Nets (DBNs) and shows how such networks can be trained efficiently (a minimal code sketch of the contrastive divergence update appears after this list).
    Talks: 2
    --
    (Sections 1-3): Intro; Complementary priors, RBMs and Contrastive Divergence
    --
    (Sections 4-7): Greedy learning; Up-Down Algorithm; Experiments on MNIST; Conclusions
  3. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul (1999).
    An introduction to variational methods for graphical models.
    Machine Learning, 37:183-233, 1999.
    Remark: This paper provides an introduction to variational methods that will be needed for the training of Deep Boltzmann Machines (the central lower bound is written out after this list).
    For the talk, only Sections 1, 2 (maybe), 4, and 6 up to 6.1 are relevant.
  4. R. Salakhutdinov and G. Hinton (2012).
    An Efficient Learning Procedure for Deep Boltzmann Machines.
    Neural Computation, 24(8), 1967-2006.
    --
    (Sections 1 and 2): Boltzmann machines; approximate evaluation of the data-dependent and data-independent distributions for learning (the underlying gradient is written out after this list).
    --
    (Section 3): Learning DBMs.
    --
    (Sections 4 and 5): Evaluating DBMs and experimental results.
  5. R. Salakhutdinov, J. B. Tenenbaum, and A. Torralba (2013).
    Learning with hierarchical-deep models.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1958-1971.
    (shorter NIPS version)
    --
    Sections 1-3
    --
    Sections 4-6
  6. N. Srivastava and R. Salakhutdinov (2012).
    Multimodal Learning with Deep Boltzmann Machines.
    NIPS 2012.
  7. V. Dumoulin, I. J. Goodfellow, A. Courville, and Y. Bengio (2013).
    On the Challenges of Physical Implementations of RBMs.
    arXiv preprint arXiv:1312.5258.
  8. E. Neftci, S. Das, B. Pedroni, K. Kreutz-Delgado, and G. Cauwenberghs (2013).
    Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems.
    arXiv preprint arXiv:1311.0966.
  9. M. A. Petrovici, J. Bill, I. Bytschok, J. Schemmel, and K. H. Meier (2013).
    Stochastic inference with deterministic spiking neurons.
    arXiv preprint arXiv:1311.3211 [q-bio.NC].
  10. Y. Bengio (2009).
    Learning Deep Architectures for AI.
    Foundations and Trends in Machine Learning, 2(1):1-127, 2009.
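
As promised in item 2 above, here is a minimal sketch of a single contrastive divergence (CD-1) update for a binary restricted Boltzmann machine, the building block of deep belief nets. This is an illustration only: the layer sizes, learning rate, and all variable names are assumptions made here, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed, illustrative sizes and learning rate (not from the paper):
n_visible, n_hidden, lr = 784, 500, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def cd1_update(v0):
    """One CD-1 step on a batch of binary data v0 (shape: batch x n_visible)."""
    global W, b_v, b_h
    # Positive phase: hidden units driven by the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to a "reconstruction".
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 gradient estimate: data correlations minus reconstruction correlations.
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / n
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Usage: one update on a random binary batch (a stand-in for, e.g., MNIST digits).
cd1_update((rng.random((20, n_visible)) < 0.5).astype(float))

Stacking RBMs trained this way, layer by layer, yields the greedy DBN training discussed in the paper.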
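
For item 3, the central object is the variational lower bound on the log-likelihood: for any distribution q(h) over the hidden variables h,

    \log p(v) \;\ge\; \sum_{h} q(h) \,\log \frac{p(v,h)}{q(h)} \;=\; \log p(v) \;-\; \mathrm{KL}\big(q(h)\,\|\,p(h \mid v)\big),

so the bound is tight exactly when q equals the true posterior. Mean-field methods restrict q to a factorized family q(h) = \prod_i q_i(h_i) and maximize the bound over the factors q_i.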
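
For item 4, the learning procedure rests on the exact log-likelihood gradient of a Boltzmann machine with binary units s_i and symmetric weights w_{ij}:

    \frac{\partial \log p(v)}{\partial w_{ij}} \;=\; \mathbb{E}_{p(h \mid v)}[s_i s_j] \;-\; \mathbb{E}_{p(v,h)}[s_i s_j].

The first, data-dependent expectation is intractable in general and is approximated variationally (mean field); the second, data-independent expectation is approximated by persistent Gibbs sampling.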


Papers related to Principles of Brain Computation :

  1. L. Büsing, J. Bill, B. Nessler, and W. Maass (2011).
    Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.
    PLOS Computational Biology, published 03 Nov 2011. doi:10.1371/journal.pcbi.1002211 (pdf)
    Remark: This paper shows how Boltzmann machines can be implemented by networks of spiking neurons (a minimal sketch of the underlying sampling dynamics appears after this list).
    Talks: 1
  2. S. Habenschuss, Z. Jonke, and W. Maass (2013). Stochastic computations in cortical microcircuit models. PLOS Computational Biology, 9(11):e1003311, 2013.
    Talks: 2
    -- Stationary distributions of network states
    -- Sudoku application
  3. S. Habenschuss, H. Puhr, and W. Maass (2013). Emergence of optimal decoding of population codes through STDP.
    Neural Computation, 25(6):1371-1407.
    Talks: 2
  4. D. Kappel, B. Nessler, and W. Maass. STDP installs in winner-take-all circuits an online approximation to hidden Markov model learning.
    PLOS Computational Biology, in press.
    Talks: 2
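
As referenced in item 1 above, the following is a minimal sketch of the abstract dynamics that the paper realizes with spiking neurons: Gibbs sampling over binary network states, whose stationary distribution is a Boltzmann distribution. The network size and parameters below are invented for illustration; the paper's contribution is to show that suitably defined spiking-neuron dynamics perform an equivalent sampling process.

import numpy as np

rng = np.random.default_rng(1)
K = 5                                    # number of binary units ("neurons")
W = rng.standard_normal((K, K))
W = (W + W.T) / 2.0                      # symmetric weights ...
np.fill_diagonal(W, 0.0)                 # ... with zero self-coupling
b = rng.standard_normal(K)               # biases
x = rng.integers(0, 2, K).astype(float)  # initial network state

counts = np.zeros(K)
n_steps = 20000
for _ in range(n_steps):
    k = rng.integers(K)                               # pick one unit at random
    p_on = 1.0 / (1.0 + np.exp(-(W[k] @ x + b[k])))   # conditional "firing" probability
    x[k] = float(rng.random() < p_on)                 # stochastic update of unit k
    counts += x

# In the long run, the empirical marginals approximate those of the
# Boltzmann distribution p(x) proportional to exp(x'Wx/2 + b'x).
print("empirical marginals:", counts / n_steps)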