Differences between revisions 1 and 10 (spanning 9 versions)
Revision 1 as of 2018-04-17 14:29:22
Size: 335
Editor: DavidLassner
Comment:
Revision 10 as of 2018-06-11 15:30:16
Size: 2799
Comment:
Deletions are marked like this. Additions are marked like this.
Line 6: Line 6:
 || '''Organisation:''' || Seulki Yeom: seulki.yeom@tu-berlin.de, Philipp Seegerer philipp.seegerer@campus.tu-berlin.de, David Lassner lassner@tu-berlin.de ||
 || '''Organisation:''' || Seulki Yeom: yeom@tu-berlin.de, Philipp Seegerer: philipp.seegerer@tu-berlin.de, David Lassner: lassner@tu-berlin.de ||
Line 8: Line 8:
 ||'''Application deadline'''|| June 15th, 2018 ||


== Enrollment / Limited number of participants ==

If you intend to participate, please send an e-mail to lassner@tu-berlin.de with the title "Beginners Workshop Enrollment" and the following text:
{{{
Name: Your name
Matr.Nr: Your student ID (Matrikelnummer)
Degree: The degree you are enrolled in and want to use this course for.
TU student: Yes/No (Are you enrolled as a regular student at TU Berlin?)
Other student: If you are not a regular student, please write your status.
ML1: Yes/No (Did you take the course Machine Learning 1 at TU Berlin?)
Other ML course: If you did not take ML1 at TU Berlin, please state which equivalent course you took, if any.
}}}

Participation spots are mostly assigned at random. Please keep in mind that auditing students and Nebenhörer (guest/secondary auditors) can only participate if fewer than the maximum number of regular TU students register for the course (http://www.studsek.tu-berlin.de/menue/studierendenverwaltung/gast_und_nebenhoererschaft/parameter/en/).

The (preliminary) workshop lecture topics are:

1. Clustering, mixtures, density estimation
 * Density estimation: kernel density estimation, Parzen windows, parametric density
 * K-means clustering
 * Gaussian mixture models, EM algorithm
 * Curse of dimensionality
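As an illustration of the clustering part, K-means can be written in a few lines of NumPy. This is only a minimal sketch of Lloyd's algorithm; the function name `kmeans` and its parameters are illustrative choices, not course material:

{{{#!python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assign every point to its nearest centroid (Euclidean distance).
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
}}}

Running it on two well-separated point clouds recovers the two groups after a handful of iterations.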

2. Manifold learning
 * LLE
 * Embeddings (RBF)
 * Multidimensional scaling
 * t-SNE
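Classical multidimensional scaling, one of the topics above, reduces to an eigendecomposition of the double-centered squared-distance matrix. A minimal NumPy sketch (function name and defaults are illustrative):

{{{#!python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points so that pairwise Euclidean
    distances approximate the given distance matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` components
    scale = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * scale
}}}

For data that already lives in a Euclidean space of dimension `dim`, the embedding reproduces the input distances exactly (up to rotation).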

3. Bayesian Methods
 * What is learning?
 * Frequentist vs Bayes
 * Bayes' rule
 * Naive Bayes
 * Bayesian linear regression
 * Bayesian/Akaike information criterion, Occam's razor
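To connect Bayes' rule with a concrete classifier, here is a minimal Gaussian naive Bayes in NumPy. The class name and the variance smoothing constant are our own choices; this is a sketch, not the course's reference implementation:

{{{#!python
import numpy as np

class GaussianNaiveBayes:
    """Naive Bayes with one independent Gaussian per class and feature."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.log_prior = np.log(np.array([(y == c).mean() for c in self.classes]))
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        # Small constant avoids division by zero for constant features.
        self.var = np.array([X[y == c].var(axis=0) for c in self.classes]) + 1e-9
        return self

    def predict(self, X):
        # log p(c | x) is proportional to log p(c) + sum_d log N(x_d; mu_cd, var_cd)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None]
                          + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(axis=-1)
        return self.classes[(self.log_prior[None] + log_lik).argmax(axis=1)]
}}}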

4. Classical and linear methods
 * Matrix factorization
 * Logistic regression
 * Regularization, Lasso, Ridge regression
 * Fisher's linear discriminant
 * Gradient descent
 * Decision boundaries
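Two of the items above, ridge regression and gradient descent, fit together nicely: the closed-form ridge solution can be checked against plain gradient descent on the same objective. A minimal sketch (function names, step size and iteration count are illustrative):

{{{#!python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed form: w = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def ridge_gd(X, y, lam=1.0, lr=0.01, n_iter=5000):
    """Gradient descent on 1/2 ||Xw - y||^2 + lam/2 ||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) + lam * w
        w -= lr * grad
    return w
}}}

With a small enough step size, both routes converge to the same weight vector.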

5. Support Vector Machine
 * Linear SVM
 * Linear separability, maximum margin and soft margin
 * Duality in optimization, KKT conditions
 * SVM for regression
 * Multi-class SVM
 * Applications
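The soft-margin linear SVM from this block can be trained by subgradient descent on the regularized hinge loss (a Pegasos-style sketch; function name and hyperparameters are our own choices):

{{{#!python
import numpy as np

def linear_svm(X, y, lam=0.01, lr=0.01, n_iter=2000):
    """Minimize  lam/2 ||w||^2 + mean(max(0, 1 - y*(w.x + b)))  by
    subgradient descent. Labels y must be +1/-1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(X)
    for _ in range(n_iter):
        margin = y * (X @ w + b)
        active = margin < 1                    # points violating the margin
        # Subgradient of the mean hinge loss plus the L2 penalty.
        gw = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b
}}}

On linearly separable data the learned hyperplane classifies every training point correctly.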

6. Kernels
 * Feature transformations
 * Kernel trick
 * Nadaraya-Watson kernel regression
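Nadaraya-Watson kernel regression, listed above, is simply a kernel-weighted average of the training targets. A minimal 1-D sketch with a Gaussian kernel (the bandwidth default `h=0.5` is an arbitrary illustrative choice):

{{{#!python
import numpy as np

def nw_regress(x_train, y_train, x_query, h=0.5):
    """Nadaraya-Watson estimator:
    y_hat(x) = sum_i K(x, x_i) y_i / sum_i K(x, x_i)."""
    # Gaussian kernel weight between every query point and training point.
    K = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (K * y_train[None, :]).sum(axis=1) / K.sum(axis=1)
}}}

Because the prediction is a convex combination of the targets, constant targets are reproduced exactly, and symmetric data yields symmetric estimates.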

7. Neural Networks
 * Rosenblatt's Perceptron
 * Multi-layer perceptron
 * Motivation with logistic regression
 * Backpropagation; stochastic and mini-batch gradient descent
 * Convolutional NNs
 * Famous ConvNet architectures: AlexNet, GoogLeNet, ResNet, etc.
 * Recurrent NNs
 * Applications
 * Practical recommendations for training DNNs, hyperparameter tuning
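The backpropagation item can be made concrete with a one-hidden-layer MLP whose analytic gradients are verified against finite differences (all names and sizes below are illustrative, not course code):

{{{#!python
import numpy as np

def mlp_forward(params, X):
    """One-hidden-layer MLP with tanh activation."""
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)
    return H @ W2 + b2, H

def mlp_loss_grad(params, X, y):
    """Squared-error loss and its gradients via backpropagation."""
    W1, b1, W2, b2 = params
    out, H = mlp_forward(params, X)
    err = out - y                        # dL/d(out) for L = 1/2 sum (out - y)^2
    loss = 0.5 * (err ** 2).sum()
    gW2 = H.T @ err
    gb2 = err.sum(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)     # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T @ dH
    gb1 = dH.sum(axis=0)
    return loss, (gW1, gb1, gW2, gb2)
}}}

A finite-difference check on any single weight is a standard way to validate a backprop implementation before training.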

IDA Wiki: Main/SS18_BeginnersWorkshop (last edited 2018-08-20 13:41:28 by PhilippSeegerer)