Block-Seminar "Classical Topics in Machine Learning"
Dates and Information
First session (topic assignment) | Wednesday, 16.11.2011, 10:00-12:00, Room FR 6046
Responsible |
Contact person |
Office hours | By appointment
Language | English
Credit | Compulsory elective course in the module "Maschinelles Lernen I" (Computer Science M.Sc.)
Content
This seminar covers a selection of classical topics in machine learning. The topics range from unsupervised learning methods (dimensionality reduction, blind source separation, clustering, etc.) over classification and regression algorithms (SVMs, neural networks, etc.) to methods for model selection.
Prerequisites
We recommend attending the lecture "Maschinelles Lernen I".
Schedule
- The preliminary meeting takes place on 16.11.2011.
- Participants choose a topic in consultation with the respective supervisor by mid-January (see topic list).
- The seminar is held as a block course at the end of the semester (the date will be announced).
Talks
Each talk should last 35 minutes (+ 10 minutes of discussion). Talks may be given in either German or English. A good talk briefly introduces the topic, presents the problem, and summarizes relevant work and solutions.
Credit
The seminar is a compulsory elective component of the master's module "Maschinelles Lernen 1". Bachelor's students may also take this master's module upon request. Successful participation in the seminar is a prerequisite for the module examination.
Topics
Each talk should last 35 minutes (+ 10 minutes of discussion). We insist on this time limit and will stop talks that exceed it significantly.
Paper(s) | Supervisor | Speaker
Roweis, S. T. and Saul, L. K. Nonlinear Dimensionality Reduction by Locally Linear Embedding, 2000 | |
MacKay, D. J. C. Gaussian Processes - A Replacement for Supervised Neural Networks?, 1997 | |
Kschischang, F. R., Frey, B. J. and Loeliger, H.-A. Factor Graphs and the Sum-Product Algorithm, 2001 | |
Rasmussen, C. E. Gaussian Processes in Machine Learning, 2003 | |
Rabiner, L. R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, 1989 | |
Candès, E. J. and Tao, T. Decoding by Linear Programming, 2005 | |
Kohonen, T. Self-organizing formation of topologically correct feature maps, 1982 | |
Friedman, J., Hastie, T. and Tibshirani, R. Additive Logistic Regression: A Statistical View of Boosting, 2000 | |
Minka, T. P. Expectation Propagation for approximate Bayesian inference, 2001 | |
Akaike, H. A new look at the statistical model identification, 1974 | |
Candès, E. J., Rudelson, M., Tao, T. and Vershynin, R. Error Correction via Linear Programming, 2005 | |
Tenenbaum, J. B., de Silva, V. and Langford, J. C. A Global Geometric Framework for Nonlinear Dimensionality Reduction, 2000 | |
Andrieu, C., de Freitas, N., Doucet, A. and Jordan, M. I. An Introduction to MCMC for Machine Learning, 2003 | |
Wipf, D. P., Palmer, J. A. and Rao, B. D. Perspectives on Sparse Bayesian Learning, 2003 | |
Quinlan, J. R. Induction of decision trees, 1986 | |
Hinton, G. E., Osindero, S. and Teh, Y. W. A Fast Learning Algorithm for Deep Belief Nets, 2006 | |
Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D. P., Schapire, R. E. and Warmuth, M. K. How to Use Expert Advice, 1997 | |
Neal, R. and Hinton, G. A View of the EM Algorithm that Justifies Incremental, Sparse, and other Variants, 1998 | |
Neal, R. M. Probabilistic Inference using Markov Chain Monte Carlo Methods, 1993 | |
Bryant, P. G. and Cordero-Brana, O. I. Model Selection Using the Minimum Description Length Principle, 2000 | |
Jordan, M. I. and Jacobs, R. A. Hierarchical Mixtures of Experts and the EM Algorithm, 1994 | |
Rasmussen, C. E. and Kuss, M. Gaussian Processes in Reinforcement Learning, 2003 | |
Jordan, M. I., Ghahramani, Z. and Jaakkola, T. S. An introduction to variational methods for graphical models, 1999 | |