Block-Seminar "Classical Topics in Machine Learning"
Dates and Information
First meeting (topic assignment) | Wednesday, 16.11.2011, 10:00-12:00, Room FR 6046
Responsible |
Contact person |
Office hours | By appointment
Language | English
Creditable as | Compulsory elective course in the module Maschinelles Lernen I (Informatik M.Sc.)
Content
This seminar covers a selection of classical topics from the field of machine learning. The topics span unsupervised learning methods (dimensionality reduction, blind source separation, clustering, etc.), classification and regression algorithms (SVMs, neural networks, etc.), and methods for model selection.
Prerequisites
We recommend attending the lecture "Maschinelles Lernen I".
Procedure
- The preliminary meeting takes place on 16.11.2011.
- Participants choose a topic by mid-January in consultation with the supervisor (see the list of topics).
- The seminar is held as a block course at the end of the semester (the date will be announced).
Talks
Each talk should last 35 minutes (+ 10 minutes of discussion) and may be given in either German or English. A good talk briefly introduces the topic, states the problem, and summarizes the relevant work and solutions.
Credit
The seminar is a compulsory elective component of the master module "Maschinelles Lernen I". Bachelor students may also take this master module upon request. Successful participation in the seminar is a prerequisite for the module examination.
Topics
Talks should last 35 minutes (+ 10 minutes of discussion) each. We take this time limit seriously and will stop talks that run significantly over.
Paper(s) | Supervisor | Speaker
Nonlinear Dimensionality Reduction by Locally Linear Embedding (Roweis and Saul, 2000) link | |
Gaussian Processes - A Replacement for Supervised Neural Networks? (MacKay, 1997) link | |
Factor Graphs and the Sum-Product Algorithm (Kschischang, Frey, and Loeliger, 2001) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54.1570&rep=rep1&type=pdf | |
Gaussian Processes in Machine Learning (Rasmussen, 2004) link | |
A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition (Rabiner, 1989) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.131.2084&rep=rep1&type=pdf | |
Decoding by Linear Programming (Candes and Tao, 2005) http://arxiv.org/pdf/math/0502327 | |
Self-organizing formation of topologically correct feature maps (Kohonen, 1982) | |
Special Invited Paper. Additive Logistic Regression: A Statistical View of Boosting (Friedman, Hastie, and Tibshirani, 2000) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.7436&rep=rep1&type=pdf | |
Expectation Propagation for approximate Bayesian inference (Minka, 2001) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.1319&rep=rep1&type=pdf | |
A new look at the statistical model identification (Akaike, 1974) http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1100705 | |
Error Correction via Linear Programming (Candes, Rudelson, Tao, and Vershynin, 2005) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.2255&rep=rep1&type=pdf | |
A Global Geometric Framework for Nonlinear Dimensionality Reduction (Tenenbaum, de Silva, and Langford, 2000) link | |
An Introduction to MCMC for Machine Learning (Andrieu, de Freitas, Doucet, and Jordan, 2003) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.7133&rep=rep1&type=pdf | |
Perspectives on Sparse Bayesian Learning (Wipf, Palmer, and Rao, 2003) link | |
Induction of decision trees (Quinlan, 1986) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.167.3624&rep=rep1&type=pdf | |
A Fast Learning Algorithm for Deep Belief Nets (Hinton, Osindero, and Teh, 2006) link | |
How to Use Expert Advice (Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire, and Warmuth, 1997) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.7476&rep=rep1&type=pdf | |
A View of the EM Algorithm that Justifies Incremental, Sparse, and other Variants (Neal and Hinton, 1998) link | |
Probabilistic Inference using Markov Chain Monte Carlo Methods (Neal, 1993) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36.9055&rep=rep1&type=pdf | |
Model Selection Using the Minimum Description Length Principle link | |
Hierarchical Mixtures of Experts and the EM Algorithm (Jordan and Jacobs, 1994) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.52.7391&rep=rep1&type=pdf | |
Gaussian Processes in Reinforcement Learning (Rasmussen and Kuss, 2003) http://books.nips.cc/papers/files/nips16/NIPS2003_CN01.pdf | |
An introduction to variational methods for graphical models (Jordan, Ghahramani, and Jaakkola, 1999) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.4999&rep=rep1&type=pdf | |