Machine Learning Theory

Fall 2018 (weeks 37-51). Lectures on Tuesdays, 10:15-13:00, at the UvA.

Aim | Prerequisites | Lecturers | Mode | Schedule | Material

Aim

Machine learning is one of the fastest growing areas of science, with far-reaching applications. In this course we focus on the fundamental ideas, theoretical frameworks, and rich array of mathematical tools and techniques that power machine learning. The course covers the core paradigms and results in machine learning theory, drawing on a mix of probability and statistics, combinatorics, information theory, optimisation and game theory.

During the course you will learn to analyse learning algorithms and to prove rigorous performance guarantees (generalisation and regret bounds) for them.

This course focuses strongly on theory. (Good applied master-level courses on machine learning are widely available, for example here, here and here.) We will cover statistical learning theory, including PAC learning, VC dimension, Rademacher complexity and boosting, as well as online learning, including prediction with expert advice, online convex optimisation and bandits.
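To give a flavour of the formalism (a sketch in our own notation, close to that of [1] and [2], not the official definitions from the lectures), the two halves of the course revolve around two types of guarantee: a PAC bound for statistical learning and a regret bound for online learning.

    % PAC guarantee: with probability at least 1 - \delta over an i.i.d. sample S of size m,
    % the learned hypothesis A(S) is \epsilon-close to the best hypothesis in the class \mathcal{H}.
    \Pr_{S \sim \mathcal{D}^m}\Big[ L_{\mathcal{D}}(A(S)) \le \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \epsilon \Big] \ge 1 - \delta

    % Regret: cumulative loss of the learner minus that of the best fixed comparator.
    R_T = \sum_{t=1}^{T} \ell_t(w_t) - \min_{w \in \mathcal{W}} \sum_{t=1}^{T} \ell_t(w)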

MasterMath Website

This course is offered as part of the MasterMath program. To participate for credit, sign up here. We use the MasterMath ELO for homework submission, forum discussions, and grades.

Prerequisites

The only prerequisites are

all at the bachelor level. The course does, however, require general 'mathematical maturity', in particular the ability to combine insights from all three fields when proving theorems.

We offer weekly homework sets whose solutions require constructing proofs. This course will not include any programming or data.

Lecturers

Mode

The grade will be composed as follows.

You are strongly encouraged to solve and submit the weekly homework in small teams. Exams are individual.

There will be a retake opportunity for the exams, which together make up 60% of the grade.

Schedule

When What Who
Tue 11 Sep Introduction. Statistical learning. Halfspaces. PAC learnability for finite hyp. classes, realizable case. Chapters 1 and 2 in [1]. Rianne
Tue 18 Sep PAC learnability for finite hyp. classes, agnostic case. Uniform convergence. Chapters 3 and 4 in [1]. Rianne
Tue 25 Sep Infinite classes. VC Dimension part 1. Sections 6.1-6.3 in [1]. Rianne
Tue 2 Oct VC Dimension part 2. Fundamental Theorem of PAC Learning. Sauer's Lemma. Chapter 6 in [1]. Rianne
Tue 9 Oct Proof of the Fundamental Theorem of PAC Learning. Nonuniform Learnability, SRM, other notions of learning. Chapter 7 in [1]. Rianne
Tue 16 Oct Rademacher Complexity. Chapter 26 in [1]. Peter
Tue 23 Oct Midterm exam.
Tue 30 Oct Full Information Online Learning (Experts). Notes (a toy implementation is sketched below the table). Wouter
Tue 6 Nov Bandits. UCB and EXP3. Sections 2.2 and 3.1 in [3]. Wouter
Tue 13 Nov Non-stationary environments. Tracking. Notes. Wouter
Tue 20 Nov Online Convex Optimization. Sections 3.1 and 3.3 in [2] and Notes (a toy OGD sketch appears below the table). Wouter
Tue 27 Nov Exp-concavity. Online Newton Step. Vovk mixability. Chapter 4 in [2] and Notes. Wouter
Tue 4 Dec Log-loss prediction. Normalised Maximum Likelihood. Non-parametric classes. Peter
Tue 11 Dec Boosting. AdaBoost. Chapter 10 in [1]. Wouter
Tue 18 Dec Online learning for solving two-player zero-sum games. Accelerated convex optimisation by means of a reduction to solving games. Notes and this paper. Wouter
Tue 8 Jan Final exam. 10:00-13:00.
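To make the experts setting from the 30 Oct lecture concrete, here is a minimal sketch of the exponential weights (Hedge) algorithm. It follows the standard textbook presentation; the learning-rate tuning is one common choice and the loss matrix is a toy example, so treat it as illustrative rather than as the algorithm exactly as presented in the course notes.

    import math
    import random

    def hedge(loss_matrix, eta):
        """Exponential weights (Hedge) over the experts indexed by columns.

        loss_matrix[t][i] is the loss of expert i in round t, assumed in [0, 1].
        Returns the algorithm's cumulative expected loss.
        """
        n = len(loss_matrix[0])
        weights = [1.0] * n
        total = 0.0
        for losses in loss_matrix:
            z = sum(weights)
            probs = [w / z for w in weights]  # play expert i with probability probs[i]
            total += sum(p * l for p, l in zip(probs, losses))
            # multiplicative update: experts with high loss lose weight
            weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        return total

    # Toy run: two experts, expert 0 systematically better.
    random.seed(0)
    T, n = 1000, 2
    losses = [[0.4 * random.random(), random.random()] for _ in range(T)]
    eta = math.sqrt(8 * math.log(n) / T)  # tuning behind the sqrt((T/2) ln n) regret bound
    alg = hedge(losses, eta)
    best = min(sum(row[i] for row in losses) for i in range(n))
    print(f"algorithm: {alg:.1f}  best expert: {best:.1f}  regret: {alg - best:.1f}")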
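Similarly, for the online convex optimisation lecture of 20 Nov, below is a minimal one-dimensional sketch of projected online gradient descent ([2], Section 3.1). The domain, losses and step-size schedule are our own toy choices, shown only to illustrate the play/observe/update loop.

    import math

    def ogd(grad_fns, radius, T):
        """Projected online gradient descent on the interval [-radius, radius].

        grad_fns[t](x) returns a (sub)gradient of the round-t convex loss at x.
        Returns the list of points played.
        """
        x, played = 0.0, []
        for t in range(T):
            played.append(x)
            eta = radius / math.sqrt(t + 1)  # D/(G sqrt(t))-style step size, with G = 1 here
            x -= eta * grad_fns[t](x)
            x = max(-radius, min(radius, x))  # Euclidean projection onto the domain
        return played

    # Toy run: losses f_t(x) = |x - z_t| with targets z_t alternating around 0.5.
    T, radius = 100, 1.0
    targets = [0.5 + 0.1 * ((-1) ** t) for t in range(T)]
    grads = [lambda x, z=z: (1.0 if x > z else -1.0) for z in targets]
    pts = ogd(grads, radius, T)
    loss = sum(abs(x - z) for x, z in zip(pts, targets))
    best = min(sum(abs(u - z) for z in targets) for u in [i / 100 for i in range(-100, 101)])
    print(f"OGD loss {loss:.2f}, best fixed point {best:.2f}, regret {loss - best:.2f}")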

The dependency graph of the second half:

[Figure: dependency graph of the lectures in the second half]

Exams

Final

The final exam will test you on the online learning component of the class. Below is the list of topics. The weekly homework prepares you for the exam.

Where we say "algorithm and regret bound" below, we may ask you about the setting and its main algorithm, and ask you to do computations involving the corresponding regret bound. If we add "analysis", you should be able to re-prove the regret bound at the exam.
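As an illustration of such a computation (our own example, not an actual exam question): with an appropriately tuned learning rate, Hedge with N experts over T rounds guarantees

    R_T \le \sqrt{\tfrac{T}{2} \ln N}

so with N = 16 experts and T = 800 rounds the regret is at most \sqrt{400 \ln 16} \approx 33.3.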

If below we say 'you should read and understand' a section, this means that:

Topics

We will not ask you about VC dimension or PAC learning; those were covered by the midterm. We will also not ask you about the final lecture on zero-sum games and accelerated optimisation.

Material

We will make use of the following sources:

[1] Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[2] Elad Hazan, Introduction to Online Convex Optimization. Foundations and Trends in Optimization, 2016.
[3] Sébastien Bubeck and Nicolò Cesa-Bianchi, Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning, 2012.