Block-Seminar "Classical Topics in Machine Learning"
Dates and Information

First meeting (topic assignment) | Wednesday, 16.11.2011, 10:00-12:00, Room FR 6046 |
Responsible | |
Contact | |
Office hours | By appointment |
Language | English |
Creditability | Elective course in the module Maschinelles Lernen I (Informatik M.Sc.) |
All information can be found in the ISIS course.
Topics (tentative)
Paper(s) | Advisor | Presenter |
Nonlinear Dimensionality Reduction by Locally Linear Embedding link | | |
Gaussian Processes - A Replacement for Supervised Neural Networks? link | | |
Factor Graphs and the Sum-Product Algorithm link | | |
Gaussian Processes in Machine Learning link | | |
A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition link | | |
Decoding by Linear Programming link | | |
Self-organizing formation of topologically correct feature maps | | |
Special Invited Paper. Additive Logistic Regression: A Statistical View of Boosting link | | |
Expectation Propagation for approximate Bayesian inference link | | |
A new look at the statistical model identification link | | |
Error Correction via Linear Programming link | | |
A Global Geometric Framework for Nonlinear Dimensionality Reduction link | | |
An Introduction to MCMC for Machine Learning link | | |
Perspectives on Sparse Bayesian Learning link | | |
Induction of decision trees link | | |
A Fast Learning Algorithm for Deep Belief Nets link | | |
How to Use Expert Advice link | | |
A View of the EM Algorithm that Justifies Incremental, Sparse, and other Variants link | | |
Probabilistic Inference using Markov Chain Monte Carlo Methods link | | |
Model Selection Using the Minimum Description Length Principle link | | |
Hierarchical Mixtures of Experts and the EM Algorithm link | | |
Gaussian Processes in Reinforcement Learning link | | |
An introduction to variational methods for graphical models link | | |
Topic areas
Ensemble learning | | |
Spectral clustering | | |
Expectation propagation | | |
Hidden Markov Models (HMM) | | |
Variational methods | | |
Learning bounds | | |
Manifold learning | | |
Locally Linear Embedding (LLE) | | |
Random forests | | |
Compressed sensing | | |
Minimum description length (MDL) | | |
Markov Chain Monte Carlo (MCMC) | | |
Gaussian processes | | |
Deep belief networks | | |
Boosting | | |
Expectation Maximization (EM) | | |
Message passing | | |
Model selection | | |
Kalman filters | | |