Research Semester Programme Machine Learning Theory: Launch Lecture

Overview

With this "Launch Lecture", we kick off the Machine Learning Theory semester programme, which runs in Spring 2023. We are delighted to host an afternoon with two distinguished researchers speaking about their advances in our understanding of machine learning. These research-level lectures are intended for a non-specialist audience. The lectures are freely accessible. Please register here.

The general theme of the afternoon is "The Nature of Information and its Active Acquisition". Our two speakers will illuminate philosophical and technical aspects of machine learning. For more details, see the abstracts below. We look forward to a strong programme and hope to welcome you on the 14th.

This is the first lecture event of the ML Theory semester programme, with a second lecture following on April 12.

Schedule

Date               Time    Event
February 14, 2023  14:00   Bob Williamson: Foundations of Machine Learning Systems
February 14, 2023  15:00   Break
February 14, 2023  15:30   Emilie Kaufmann: A Tale of Two Non-parametric Bandit Problems
February 14, 2023  16:30   Reception

Speakers

Bob Williamson

Bob Williamson (University of Tübingen) is Professor of Foundations of Machine Learning Systems at the University of Tübingen, where he is also a member of the Tübingen AI Center and the Cluster of Excellence "Machine Learning in the Sciences". Previously he was at the ANU and NICTA in Australia. He is a Fellow of the Australian Academy of Science.

Foundations of Machine Learning Systems [slides]

Abstract: I will present some new insights into some foundational assumptions about machine learning systems, including why we might want to replace the expectation in our definition of generalisation error, why independence is intrinsically relative and how it is intimately related to fairness, why the data we ingest might not even have a probability distribution, and what one might do in such cases, and how we have been (perhaps unwittingly) working with these more exotic notions for some time already.


Emilie Kaufmann

Emilie Kaufmann (CNRS, Univ. Lille) is a CNRS researcher at the University of Lille, France, and a member of the Inria Scool team. She received her PhD in statistics from Telecom ParisTech in 2014 and has since studied various types of sequential decision-making problems. In particular, she has worked extensively on multi-armed bandit problems. Her recent research interests include applications of bandits to (sequential) clinical trials and the sample complexity of more general reinforcement learning problems.

A Tale of Two Non-parametric Bandit Problems

Abstract: In a bandit model, an agent sequentially collects samples (rewards) from different distributions, called arms, in order to achieve some objective related to learning or playing the best arms. Depending on the application, different assumptions can be made on these distributions, from Bernoulli (e.g., to model the success/failure of a treatment) to complex multi-modal distributions (e.g., to model the yield of a crop in agriculture). In this talk, we will present non-parametric algorithms which adapt optimally to the actual distributions of the arms, assuming that they are bounded. We will first show the robustness of a Non-Parametric Thompson Sampling strategy to a risk-averse performance metric. Then, we will discuss how the algorithm can be modified to tackle pure exploration objectives, bringing new insights into so-called Top Two algorithms.
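To make the basic bandit setting in the abstract concrete, here is a minimal sketch of Thompson Sampling for Bernoulli arms, the simplest (parametric) case the abstract mentions. This is an illustrative toy, not the non-parametric algorithm from the talk; all names and parameters are our own.

```python
import random


def thompson_sampling_bernoulli(arm_means, horizon, seed=0):
    """Play a Bernoulli bandit for `horizon` rounds with Thompson Sampling.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    unknown mean; every round we draw one sample per posterior and pull
    the arm whose sample is largest.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    successes = [0] * k
    failures = [0] * k
    total_reward = 0
    for _ in range(horizon):
        # One posterior sample per arm; pick the most optimistic draw.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Simulate a Bernoulli reward from the chosen arm.
        reward = 1 if rng.random() < arm_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward, successes, failures
```

Run on two arms with very different means (say 0.1 and 0.9) over a few thousand rounds, the posterior sampling quickly concentrates pulls on the better arm, which is the adaptive behaviour that the talk's non-parametric algorithms achieve for general bounded distributions.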