Fall 2019 (weeks 37–51). Lectures on Tuesdays, 14:00–16:45, at the UvA, room SP C0.110.
Machine learning is one of the fastest-growing areas of science, with far-reaching applications. In this course we focus on the fundamental ideas, theoretical frameworks, and rich array of mathematical tools and techniques that power machine learning. The course covers the core paradigms and results in machine learning theory with a mix of probability and statistics, combinatorics, information theory, optimisation and game theory.
During the course you will learn to
This course focuses strongly on theory. (Good applied master-level courses on machine learning are widely available elsewhere.) We will cover statistical learning theory, including PAC learning, VC dimension, Rademacher complexity and boosting, as well as online learning, including prediction with expert advice, online convex optimisation and bandits.
This course is offered as part of the MasterMath program. To participate for credit, sign up via the MasterMath website. We use the MasterMath ELO for submitting homework, for forum discussions, and for receiving grades.
The only prerequisites are
all at the bachelor level. The course does, however, require general 'mathematical maturity', in particular the ability to combine insights from all three fields when proving theorems.
We offer weekly homework sets whose solutions require constructing proofs. This course will not include any programming or data.
The grade will be composed as follows.
The average of midterm and final exam grades has to be at least 5.0 to pass the course.
You are strongly encouraged to solve and submit the weekly homework in small teams. Exams are individual.
There will be a retake opportunity for either or both exams, which together count for 60% of the grade.
| When | What | Who |
| --- | --- | --- |
| Tue 10 Sep | Introduction. Statistical learning. Halfspaces. PAC learnability for finite hyp. classes, realisable case. Chapters 1 and 2 in [1]. | Rianne |
| Tue 17 Sep | PAC learnability for finite hyp. classes, agnostic case. Uniform convergence. Chapters 3 and 4 in [1]. | Rianne |
| Tue 24 Sep | Infinite classes. VC dimension, part 1. Sections 6.1–6.3 in [1]. | Rianne |
| Tue 1 Oct | VC dimension, part 2. Fundamental theorem of PAC learning. Sauer's lemma. Chapter 6 in [1]. | Rianne |
| Tue 8 Oct | Proof of the fundamental theorem of PAC learning. Nonuniform learnability, SRM, other notions of learning. Chapter 7 in [1]. | Rianne |
| Tue 15 Oct | Rademacher complexity. Chapter 26 in [1]. | Peter |
| Tue 22 Oct | Midterm exam. | |
| Tue 29 Oct | Full-information online learning (experts). Notes; see the Python sketch below the schedule. | Wouter |
| Tue 5 Nov | Bandits. UCB and EXP3. Sections 2.2 and 3.1 in [3]. | Wouter |
| Tue 12 Nov | Non-stationary environments. Tracking. Notes. | Wouter |
| Tue 19 Nov | Online convex optimisation. Sections 3.1 and 3.3 in [2] and notes. | Wouter |
| Tue 26 Nov | Exp-concavity. Online Newton Step. Chapter 4 in [2] and notes. | Wouter |
| Tue 3 Dec | Vovk mixability. Boosting. AdaBoost. Notes and Chapter 10 in [1]. | Wouter |
| Tue 10 Dec | Log-loss prediction. Normalised maximum likelihood. Non-parametric classes. | Peter |
| Tue 17 Dec | Online learning for solving two-player zero-sum games. Accelerated convex optimisation via a reduction to solving games. Notes. | Wouter |
| Tue 7 Jan | Final exam. | |
| Tue 11 Feb | Retake exam(s). | |
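The experts setting from the 29 Oct lecture admits a compact illustration. Below is a minimal Python sketch of the exponential weights (Hedge) algorithm, under assumptions of our own choosing (losses in [0, 1] and the standard tuning η = √(8 ln(N)/T)); the function name and the example data are hypothetical, and the authoritative definitions are those in the lecture notes.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential weights over the experts (columns); losses in [0, 1]."""
    T, N = loss_matrix.shape
    log_w = np.zeros(N)                   # log-weights, uniform start
    total = 0.0                           # learner's cumulative expected loss
    for t in range(T):
        p = np.exp(log_w - log_w.max())   # stabilised exponentiation
        p /= p.sum()                      # current distribution over experts
        total += p @ loss_matrix[t]       # expected loss this round
        log_w -= eta * loss_matrix[t]     # multiplicative weight update
    return total

# Hypothetical usage: regret against the best expert in hindsight.
rng = np.random.default_rng(0)
L = rng.random((128, 16))                 # T = 128 rounds, N = 16 experts
eta = np.sqrt(8 * np.log(16) / 128)       # standard tuning for [0,1] losses
print(hedge(L, eta) - L.sum(axis=0).min())
```

Keeping the weights in the log domain avoids numerical underflow over long horizons; with the tuning above, the expected regret is at most √((T/2) ln N).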
The dependency graph of the online learning component of the course
Including all proofs, unless stated otherwise.
Intro to statistical and game-theoretic learning. PAC learnability for finite hyp. classes, realisable case (Chapters 1 and 2). Agnostic PAC learning and learning via uniform convergence (Chapters 3 and 4). No-free-lunch theorem (no proof). VC dimension. Half-spaces. Fundamental theorem of PAC learning. Symmetrisation (no proof). Sauer's lemma (Section 5.1 and Chapter 6, excluding 6.5.2; proofs as in the lecture). Non-uniform learnability, SRM (Sections 7.1 and 7.2). Rademacher complexity: only what is discussed in the lecture, not the homework.
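As a reminder (this restates Chapter 6 of [1]; check the notation against the lecture), Sauer's lemma bounds the growth function $\tau_{\mathcal{H}}(m)$, the maximal number of distinct behaviours of $\mathcal{H}$ on $m$ points, for a class of VC dimension $d$:
$$\tau_{\mathcal{H}}(m) \;\le\; \sum_{i=0}^{d} \binom{m}{i}, \qquad \text{and for } m > d+1, \quad \tau_{\mathcal{H}}(m) \;\le\; \left(\frac{em}{d}\right)^{d}.$$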
If below we say "algorithm and regret bound", we might question you on the setting and its main algorithm, and ask you to do computations involving the corresponding regret bound (see the worked example below). If we add "analysis", you should be able to re-prove the regret bound at the exam.
If below we say 'you should read and understand' a section, this means that:
We will not ask you about VC dimension or PAC learning; that material was covered by the midterm. We will also not ask about the final lecture on zero-sum games and accelerated optimisation.
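As an illustration of such a computation (the constant depends on the exact tuning proved in the notes, so treat the numbers as indicative): for prediction with expert advice with losses in $[0,1]$, exponential weights with learning rate $\eta = \sqrt{8\ln(N)/T}$ guarantees
$$R_T \;=\; \sum_{t=1}^{T}\hat{\ell}_t \;-\; \min_{1\le i\le N}\sum_{t=1}^{T}\ell_{t,i} \;\le\; \sqrt{\frac{T}{2}\ln N},$$
where $\hat{\ell}_t$ is the learner's expected loss in round $t$ and $\ell_{t,i}$ the loss of expert $i$. With $N = 16$ experts and $T = 128$ rounds, the bound evaluates to $\sqrt{64\ln 16} \approx 13.3$.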
We will make use of the following sources: