Beginners Workshop Machine Learning

|| '''Lecture time:''' || 8:45 - 12:00, 13:30 - 17:00 (approx.) ||

Enrollment / Limited number of participants

If you intend to participate, please send an e-mail to lassner@tu-berlin.de with the title "Beginners Workshop Enrollment" and the following text:

Name: Your name
Matr.Nr: Your student ID (Matrikelnummer)
Degree: The degree you are enrolled in and want to use this course for.
TU student: Yes/No (Are you enrolled as a regular student at TU Berlin?)
Other student: If you are not a regular student, please write your status.
ML1: Yes/No (Did you take the course Machine Learning 1 at TU Berlin?)
Other ML course: If you did not take ML1 at TU Berlin, please write if you took any equivalent course.

Participation spots are mostly assigned on a random basis. Please keep in mind that guest auditors and Nebenhörer (students enrolled at another university) can only participate if fewer than the maximum number of regular TU Berlin students register for the course (http://www.studsek.tu-berlin.de/menue/studierendenverwaltung/gast_und_nebenhoererschaft/parameter/en/).

The (preliminary) workshop lecture topics are:

1. Clustering, mixtures, density estimation

  • Density estimation: kernel density estimation, Parzen windows, parametric density
  • K means clustering
  • Gaussian mixture models, EM algorithm
  • Curse of dimensionality
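
As a first taste of topic 1, a minimal sketch of k-means and a Gaussian mixture fitted by EM, assuming scikit-learn is installed (the toy data is illustrative, not course material):

{{{#!python
# Illustrative toy data; assumes scikit-learn is installed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # hard assignments
gmm = GaussianMixture(n_components=2).fit(X)             # fitted via EM
print(labels[:5], gmm.predict_proba(X[:5]).round(2))     # soft assignments
}}}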

2. Manifold learning

  • LLE
  • Embeddings (RBF)
  • Multidimensional scaling
  • t-SNE
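
A minimal sketch for topic 2, assuming scikit-learn: embedding a 3-D Swiss roll into two dimensions with LLE and t-SNE (dataset and hyperparameters are illustrative):

{{{#!python
# Assumes scikit-learn; dataset and hyperparameters are illustrative.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding, TSNE

X, _ = make_swiss_roll(n_samples=500, random_state=0)  # 3-D manifold data
X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(X_lle.shape, X_tsne.shape)                       # both (500, 2)
}}}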

3. Bayesian Methods

  • What is learning?
  • Frequentist vs Bayes
  • Bayes rule
  • Naive Bayes
  • Bayesian linear regression
  • Bayesian/Akaike information criterion, Occam's razor
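
Bayes' rule from topic 3 as a small worked example; the numbers are invented for illustration:

{{{#!python
# Bayes' rule with made-up numbers: P(disease | positive test).
prior = 0.01         # P(disease)
sensitivity = 0.95   # P(positive | disease)
false_pos = 0.05     # P(positive | no disease)

evidence = sensitivity * prior + false_pos * (1 - prior)  # P(positive)
posterior = sensitivity * prior / evidence                # Bayes' rule
print(f"P(disease | positive) = {posterior:.3f}")         # ~0.161
}}}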

4. Classical and linear methods

  • Matrix factorization
  • Logistic regression
  • Regularization, Lasso, Ridge regression
  • Fisher's Linear discriminant
  • Gradient descent
  • Decision boundaries
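
A minimal sketch for topic 4: logistic regression trained by gradient descent with an L2 (ridge) penalty, in plain NumPy; data, step size, and penalty strength are illustrative choices:

{{{#!python
# Plain-NumPy logistic regression; data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # linearly separable toy labels

w, lr, lam = np.zeros(2), 0.1, 0.01          # weights, step size, ridge strength
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))             # sigmoid predictions
    grad = X.T @ (p - y) / len(y) + lam * w  # cross-entropy gradient + L2 term
    w -= lr * grad                           # gradient descent step
print("learned weights:", w)
}}}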

5. Support Vector Machine

  • Linear SVM
  • Linear separability, maximum margin and soft margin
  • Duality in optimization, KKT conditions
  • SVM for regression
  • Multi-class SVM
  • Applications
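
A minimal sketch for topic 5, assuming scikit-learn: a linear soft-margin SVM on toy data, where C trades margin width against slack:

{{{#!python
# Assumes scikit-learn; toy data is illustrative. Small C = wider margin
# with more violations; large C = narrower, stricter margin.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print("number of support vectors:", len(clf.support_vectors_))
}}}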

6. Kernels

  • Feature transformations
  • Kernel trick
  • Nadaraya-Watson kernel regression
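
A minimal sketch for topic 6: Nadaraya-Watson kernel regression with a Gaussian kernel in plain NumPy (the bandwidth h is an illustrative choice):

{{{#!python
# Plain-NumPy Nadaraya-Watson regression; bandwidth h is illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 50))
y = np.sin(x) + rng.normal(0, 0.2, 50)       # noisy samples of sin(x)

def nw_predict(x_query, x, y, h=0.3):
    # Prediction = average of y, weighted by a Gaussian kernel around x_query.
    w = np.exp(-0.5 * ((x_query - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

print(nw_predict(np.pi / 2, x, y))           # close to sin(pi/2) = 1
}}}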

7. Neural Networks

  • Rosenblatt's Perceptron
  • Multi-layer perceptron
  • Motivation with logistic regression
  • Backpropagation; stochastic and mini-batch gradient descent
  • Convolutional NNs
  • Famous conv net architectures: AlexNet, GoogLeNet, ResNet, etc.
  • Recurrent NNs
  • Applications
  • Practical recommendations for training DNNs, hyperparameter tuning
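
Finally, a minimal sketch for topic 7: a small multi-layer perceptron trained with backpropagation on XOR, in plain NumPy; layer size, learning rate, and iteration count are illustrative choices:

{{{#!python
# Plain-NumPy MLP for XOR; hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)    # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass, hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    dp = p - y                               # grad of cross-entropy w.r.t. logits
    dW2, db2 = h.T @ dp, dp.sum(0)           # backpropagate to output weights
    dh = (dp @ W2.T) * (1 - h ** 2)          # through tanh (derivative 1 - h^2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.1 * G                         # full-batch gradient descent step
print(p.round(2).ravel())                    # approaches [0, 1, 1, 0]
}}}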
