Beginners Workshop Machine Learning
From:
2018-09-03
To:
2018-09-14
Exam:
2018-09-24
Organisation:
Seulki Yeom: yeom@tu-berlin.de, Philipp Seegerer: philipp.seegerer@tu-berlin.de, David Lassner: lassner@tu-berlin.de
Language
English
Application deadline
June 15th, 2018
Enrollment / Limited number of participants
If you intend to participate, please send an e-mail to lassner@tu-berlin.de with the subject "Beginners Workshop Enrollment" and the following text:
Name: Your name
Matr.Nr.: Your student ID (Matrikelnummer)
Degree: The degree you are enrolled in and want to use this course for.
TU student: Yes/No (Are you enrolled as a regular student at TU Berlin?)
Other student: If you are not a regular student, please state your status.
ML1: Yes/No (Did you take the course Machine Learning 1 at TU Berlin?)
Other ML course: If you did not take ML1 at TU Berlin, please state any equivalent course you took.
Participation spots are mostly assigned at random. Please keep in mind that auditing students and Nebenhörer can only participate if fewer than the maximum number of regular TU students register for the course (http://www.studsek.tu-berlin.de/menue/studierendenverwaltung/gast_und_nebenhoererschaft/parameter/en/).
The (preliminary) workshop lecture topics are:
1. Clustering, mixtures, density estimation
- Density estimation: kernel density estimation, Parzen windows, parametric density
- K-means clustering (sketch below)
- Gaussian mixture models, EM algorithm
- Curse of dimensionality
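A minimal sketch of K-means (Lloyd's algorithm) in plain NumPy. The synthetic two-cluster data, k = 2 and the fixed iteration count are illustrative assumptions, not course material:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)),      # two synthetic clusters (assumed data)
                   rng.normal(5, 1, (50, 2))])
    k = 2
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers

    for _ in range(10):
        # Assignment step: nearest center for each point
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each center as the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

    print(centers)

Each iteration alternates an assignment step and an update step; the same alternation reappears in the EM algorithm for Gaussian mixture models listed above.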
2. Manifold learning
- Locally linear embedding (LLE)
- Embeddings (RBF)
- Multidimensional scaling
- t-SNE (sketch below)
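As a small illustration of t-SNE, this sketch embeds a toy dataset with scikit-learn; the choice of the digits dataset and the perplexity value are assumptions for illustration:

    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    X, y = load_digits(return_X_y=True)          # 64-dimensional digit images
    emb = TSNE(n_components=2, perplexity=30.0,  # perplexity is an assumed choice
               random_state=0).fit_transform(X)  # 2-D embedding, e.g. for plotting
    print(emb.shape)                             # (1797, 2)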
3. Bayesian Methods
- What is learning?
- Frequentist vs Bayes
- Bayes rule (worked example below)
- Naive Bayes
- Bayesian linear regression
- Bayesian/Akaike information criterion, Occam's razor
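A worked example of Bayes rule, with invented numbers: even a fairly accurate test yields a modest posterior probability when the prior is small:

    # Bayes rule on a screening-test example (all numbers are made up).
    p_d = 0.01                      # prior P(disease)
    p_pos_given_d = 0.99            # likelihood P(positive | disease)
    p_pos_given_not_d = 0.05        # false-positive rate P(positive | no disease)

    # Total probability of a positive test
    p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
    # Bayes rule: P(disease | positive)
    p_d_given_pos = p_pos_given_d * p_d / p_pos
    print(round(p_d_given_pos, 3))  # 0.167: far from certain despite the positive test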
4. Classical and linear methods
- Matrix factorization
- Logistic regression
- Regularization, Lasso, ridge regression (sketch below)
- Fisher's linear discriminant
- Gradient descent
- Decision boundaries
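A minimal NumPy sketch of ridge regression via its closed-form solution w = (X^T X + lambda*I)^(-1) X^T y; the synthetic data and lambda = 1 are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    true_w = np.array([1.0, -2.0, 0.0, 0.5, 3.0])   # assumed ground-truth weights
    y = X @ true_w + 0.1 * rng.normal(size=100)     # noisy linear targets

    lam = 1.0  # regularization strength (assumed; would be tuned in practice)
    w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)  # ridge estimate
    print(w)   # close to true_w, shrunk slightly toward zero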
5. Support Vector Machine
- Linear SVM
- Linear separability, maximum margin and soft margin (sketch below)
- Duality in optimization, KKT conditions
- SVM for regression
- Multi-class SVM
- Applications
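A minimal soft-margin linear SVM sketch using scikit-learn; the toy blob data and C = 1 are illustrative choices:

    from sklearn.datasets import make_blobs
    from sklearn.svm import LinearSVC

    X, y = make_blobs(n_samples=100, centers=2, random_state=0)
    clf = LinearSVC(C=1.0).fit(X, y)  # smaller C = softer margin, stronger regularization
    print(clf.coef_, clf.intercept_)  # parameters of the separating hyperplane
    print(clf.score(X, y))            # training accuracy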
6. Kernels
- Feature transformations
- Kernel trick
- Nadaraya-Watson kernel regression (sketch below)
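A minimal Nadaraya-Watson kernel regression sketch with a Gaussian kernel: the prediction is a locally weighted average of the training targets. The 1-D sine data and bandwidth h = 0.3 are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 2 * np.pi, 50))
    y = np.sin(x) + 0.1 * rng.normal(size=50)   # noisy sine data (assumed)

    def nw_predict(x_query, x_train, y_train, h=0.3):
        # Gaussian kernel weight between every query point and training point
        w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
        return (w * y_train).sum(axis=1) / w.sum(axis=1)  # weighted average

    x_new = np.linspace(0, 2 * np.pi, 5)
    print(nw_predict(x_new, x, y))   # smooth estimates of sin at the query points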
7. Neural Networks
- Rosenblatt's perceptron (sketch after this list)
- Multi-layer perceptron
- Motivation with logistic regression
- Backpropagation; stochastic and minibatch gradient descent
- Convolutional NNs
- Famous conv net architectures: AlexNet, GoogLeNet, ResNet, etc.
- Recurrent NNs
- Applications
- Practical recommendations for training DNNs, hyperparameter tuning
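A minimal sketch of Rosenblatt's perceptron learning rule in NumPy; the linearly separable toy data, zero initialization and epoch count are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)),   # two linearly separable classes (assumed)
                   rng.normal(+2, 1, (50, 2))])
    y = np.array([-1] * 50 + [+1] * 50)

    w, b = np.zeros(2), 0.0
    for _ in range(20):                          # epochs over the training set
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:           # misclassified (or on the boundary)
                w += yi * xi                     # perceptron update rule
                b += yi

    print((np.sign(X @ w + b) == y).mean())      # training accuracy

Stacking such units with nonlinear activations and training them by backpropagation leads to the multi-layer perceptron covered in the same lecture.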