Block-Seminar "Classical Topics in Machine Learning"
Dates and Information
First meeting (topic assignment) | Wednesday, 16.11.2011, 10:00-12:00, Room FR 6046 |
Responsible | |
Contact person | |
Office hours | By appointment |
Language | English |
Credit | Elective course in the module Machine Learning I (Computer Science M.Sc.) |
All information can be found in the ISIS course.
Topics (tentative)
Paper(s) | Supervisor | Speaker |
Nonlinear Dimensionality Reduction by Locally Linear Embedding link | | |
Gaussian Processes - A Replacement for Supervised Neural Networks? link | | |
Factor Graphs and the Sum-Product Algorithm link | | |
Gaussian Processes in Machine Learning link | | |
A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition link | | |
Decoding by Linear Programming link | | |
Self-organizing formation of topologically correct feature maps | | |
Special Invited Paper. Additive Logistic Regression: A Statistical View of Boosting link | | |
Expectation Propagation for approximate Bayesian inference link | | |
A new look at the statistical model identification link | | |
Error Correction via Linear Programming link | | |
A Global Geometric Framework for Nonlinear Dimensionality Reduction link | | |
An Introduction to MCMC for Machine Learning link | | |
Perspectives on Sparse Bayesian Learning link | | |
Induction of decision trees link | | |
A Fast Learning Algorithm for Deep Belief Nets link | | |
How to Use Expert Advice link | | |
A View of the EM Algorithm that Justifies Incremental, Sparse, and other Variants link | | |
Probabilistic Inference using Markov Chain Monte Carlo Methods link | | |
Model Selection Using the Minimum Description Length Principle link | | |
Hierarchical Mixtures of Experts and the EM Algorithm link | | |
Gaussian Processes in Reinforcement Learning link | | |
An introduction to variational methods for graphical models link | | |
Ensemble learning | | |
Spectral clustering | | |
Expectation propagation | | |
Hidden Markov Models (HMM) | | |
Variational methods | | |
Learning bounds | | |
Manifold learning | | |
Locally Linear Embedding (LLE) | | |
Random forests | | |
Compressed sensing | | |
Minimum description length (MDL) | | |
Markov Chain Monte Carlo (MCMC) | | |
Gaussian processes | | |
Deep belief networks | | |
Boosting | | |
Expectation Maximization (EM) | | |
Message passing | | |
Model selection | | |
Kalman filters | | |