Course description

This is an introductory course in machine learning, covering fundamental topics such as supervised learning, unsupervised learning, Bayesian inference and reinforcement learning. The objective is not only to teach students how to apply common machine learning algorithms, but also to provide them with sufficient knowledge to carry out research in machine learning or a related field. To this end, the course covers the basic mathematical formulations that underlie machine learning.

Learning outcomes

  • Understand the mathematical principles that form the basis of machine learning.
  • Solve basic mathematical exercises related to machine learning theory.
  • Recognize the type of learning problem and select appropriate algorithms.
  • Implement machine learning algorithms in a common programming language and test them on actual learning problems.
  • Evaluate and interpret the outcome of learning on a given problem and compare the outcome for different algorithms.
  • Select appropriate values of hyperparameters through validation.
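As a small illustration of the last outcome, the following is a minimal sketch of selecting a hyperparameter (here, the number of neighbours k in a simple 1-D nearest-neighbour classifier) by measuring accuracy on a held-out validation set. The toy data and the candidate values of k are made up for illustration.

```python
def knn_predict(train, x, k):
    # Classify x by majority vote among the k nearest training points.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

def accuracy(train, data, k):
    # Fraction of points in `data` classified correctly.
    return sum(knn_predict(train, x, k) == y for x, y in data) / len(data)

# Hypothetical toy data: points below 0.5 have label 0, points above have label 1.
train = [(0.1, 0), (0.2, 0), (0.35, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
val = [(0.15, 0), (0.4, 0), (0.7, 1), (0.85, 1)]

# Select the value of k with the highest validation accuracy.
best_k = max([1, 3, 5], key=lambda k: accuracy(train, val, k))
```

The same pattern (train on one subset, compare candidate hyperparameter values on another) carries over to any learning algorithm covered in the course.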


Prerequisites

The student is expected to have taken the following (or similar) courses:

  • Linear Algebra
  • Calculus
  • Probability theory/statistics


Evaluation

The evaluation will consist of a set of homework and programming exercises, and a final exam. The course grade will be calculated as follows:

Final Grade = 0.3 * Homework + 0.3 * Labs + 0.4 * Exam,

where Homework and Labs are the averages over the homework and programming exercises, respectively, and Exam is the grade of the final exam. To pass the course, students must hand in all homework assignments and all labs, and a minimum grade of 5 is required in each of the three parts.


Schedule

Lecture 1 Introduction to machine learning (Anders)
Lecture 2 Linear models (Anders)
Lecture 3 The bias-variance tradeoff and overfitting (Anders and Vicenç). Practical session #1.
Lecture 4 Optimization and gradient descent (Anders)
Lecture 5 Common algorithms for supervised learning (Anders)
Lecture 6 Unsupervised learning, clustering and principal component analysis (Anders and Vicenç). Practical session #2.
Lecture 7 Bayesian machine learning (Vicenç)
Lecture 8 Probabilistic graphical models (Vicenç)
Lecture 9 Reinforcement learning 1 (Gergely)
Lecture 10 Reinforcement learning 2 (Gergely)


Literature

  • Y. Abu-Mostafa, M. Magdon-Ismail & H.-T. Lin: Learning from Data.
  • C. Bishop: Pattern Recognition and Machine Learning.
  • D. MacKay: Information Theory, Inference, and Learning Algorithms.
  • R. Sutton & A. Barto: Reinforcement Learning: An Introduction.