Science Park Congress Center, Amsterdam - Thursday, October 13, 2022

Event Description:

The Dutch Day on Optimization is the annual in-person event related to the Dutch Seminar on Optimization. The main purpose of the event is to bring together the Dutch optimization community across the areas of operations research, computer science, and discrete mathematics. The event will highlight the research of both national and international speakers.

The lectures will take place in the Turing room in the Amsterdam Science Park Congress Center. Registration, coffee breaks and evening drinks will take place in the Congress Center Foyer. Lunch will be served in the Newton room which is around the corner from the Turing room.


Registration is free: google form. Only register if you plan to attend the event in person. The registration deadline is Friday, October 7, but please register as early as possible to help us with planning and catering. The conference will also be livestreamed over Zoom, and online participants will be able to ask questions by voice or via the chat.

Zoom Link: livestream (will activate on the event day)

Workshop Schedule: Thursday, October 13

Time Location Description
10:25am - 10:58am Foyer Registration
10:58am - 11:05am Turing Room Opening Remarks
11:05am - 12pm Turing Room Lars Rohwedder (Maastricht)
Recent Advances in Flow Time Scheduling

Flow time is one of the most natural metrics to optimize in scheduling, but algorithmically it can be notoriously difficult to handle. I will talk about some recent advances on this topic, focusing on two results by my co-authors and myself: a PTAS for the sum of weighted flow times on a single machine, and improved approximation guarantees for unrelated parallel machines. The first result is enabled by a study of structural properties of constraints in a natural ILP formulation; the second relies on a novel connection to discrepancy theory.
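As a quick illustration of the metric itself (not of the talk's results): the flow time of a job is its completion time minus its release time, and for tiny instances the optimal schedule can be found by brute force. The instance below is hypothetical.

```python
from itertools import permutations

# Hypothetical toy instance: (release_time, processing_time, weight) per job.
jobs = [(0, 3, 1), (1, 1, 2), (2, 2, 1)]

def weighted_flow_time(order):
    """Total weighted flow time of a non-preemptive single-machine schedule."""
    t, total = 0, 0
    for j in order:
        r, p, w = jobs[j]
        t = max(t, r) + p      # completion time of job j
        total += w * (t - r)   # flow time = completion time - release time
    return total

# Brute force over all 3! = 6 orders.
best = min(weighted_flow_time(o) for o in permutations(range(len(jobs))))
print(best)  # -> 11, attained by running job 1, then job 2, then job 0
```

Even on this three-job instance, the weighted-flow-time-optimal order differs from scheduling by release time, which hints at why the metric is hard to handle in general.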
12pm - 1pm Newton Room Catered Lunch
1pm - 2pm Turing Room Lightning Talks
2:00pm - 2:25pm Foyer Coffee Break
2:25pm - 2:50pm Turing Room Renata Sotirov (Tilburg)
Integer Semidefinite Programming - a New Perspective

Integer semidefinite programming can be viewed as a generalization of integer programming in which the vector variables are replaced by positive semidefinite integer matrix variables. The combination of positive semidefiniteness and integrality makes it possible to formulate various optimization problems as integer semidefinite programs (ISDPs). Nevertheless, ISDPs have received attention only very recently.
In this talk we show how to extend the Chvátal-Gomory (CG) cutting-plane procedure to ISDPs. We also show how to exploit CG cuts in a branch-and-cut framework for ISDPs. Finally, we demonstrate the practical strength of the CG cuts in our branch-and-cut algorithm. Our results provide a new perspective on ISDPs.
This is joint work with F. de Meijer.
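For readers unfamiliar with CG cuts, here is the classical scalar version that the talk generalizes to the semidefinite setting, sketched on a made-up toy inequality.

```python
import math
from fractions import Fraction

# Classical Chvatal-Gomory rounding for integer programs: if a.x <= b is a
# valid inequality and lam >= 0, then floor(lam*a).x <= floor(lam*b) remains
# valid for every integer feasible x, since the left side is integral.
# Toy constraint: 2x + 2y <= 3 with x, y >= 0 integer.
a, b = [Fraction(2), Fraction(2)], Fraction(3)
lam = Fraction(1, 2)                           # nonnegative multiplier

cut_lhs = [math.floor(lam * ai) for ai in a]   # floor([1, 1]) = [1, 1]
cut_rhs = math.floor(lam * b)                  # floor(3/2) = 1
# Resulting CG cut: x + y <= 1, which cuts off the fractional LP point (3/4, 3/4).
```

The talk's contribution is extending this rounding procedure from linear constraints to positive semidefinite matrix variables; the snippet above only shows the classical linear case.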
2:50pm - 3:15pm Turing Room Guus Regts (UvA)
The independence polynomial: algorithms and complex zeros

Determining whether a graph has an independent set of size \(k\) is a well-known NP-hard problem. Clearly, counting the number of independent sets of size \(k\) is at least as hard. Perhaps more surprisingly, determining the total number of independent sets is just as hard, even for graphs of maximum degree \(3\). In this talk I will survey some recent results about the hardness of *approximately* determining the number of independent sets and, more generally, other evaluations of the generating function of the independent sets in a graph, also known as the independence polynomial. The location of the complex zeros of the independence polynomial plays an important role.
3:15pm - 3:40pm Turing Room Mathias Staudigl (Maastricht)
On the iteration complexity of first- and second-order Hessian barrier algorithms for non-convex non-smooth conic optimization

A key problem in mathematical imaging, signal processing, and computational statistics is the minimization over conic domains of non-convex objective functions that are continuous but potentially non-smooth at the boundary of the feasible set. For such problems, this work proposes a new family of first- and second-order interior-point methods for non-convex and non-smooth conic constrained optimization problems, combining the Hessian-barrier method with quadratic and cubic regularization techniques. Our approach is based on a potential-reduction mechanism and attains a suitably defined class of approximate first- or second-order KKT points with worst-case iteration complexity \(O(\epsilon^{-2})\) (first-order) and \(O(\epsilon^{-3/2})\) (second-order), respectively. Based on these findings, we develop a new double-loop path-following scheme attaining the same complexity, modulo adjusted constants. These complexity bounds are known to be optimal in the unconstrained case, and our work shows that they are upper bounds in the case with complicated constraints as well. A key feature of our methodology is the use of self-concordant barriers to construct strictly feasible iterates via a disciplined decomposition approach, without sacrificing the iteration complexity of the method. To the best of our knowledge, this work is the first to achieve these worst-case complexity bounds under such weak conditions for general conic constrained optimization problems.
This is joint work with Pavel Dvurechensky (WIAS Berlin) and is based on the corresponding paper.
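A minimal sketch of the general barrier idea only, on a made-up one-dimensional toy (this is not the algorithm from the talk): adding \(-\mu\log x\) to the objective makes the gradient blow up at the boundary, so plain gradient descent keeps its iterates strictly feasible.

```python
# Toy log-barrier sketch: minimize f(x) - mu*log(x) over x > 0 by gradient
# descent; the barrier term repels iterates from the boundary x = 0.
def barrier_gradient_descent(f_grad, x0=1.0, mu=0.1, step=0.05, iters=500):
    x = x0
    for _ in range(iters):
        g = f_grad(x) - mu / x   # gradient of f(x) - mu*log(x)
        x -= step * g
        assert x > 0             # every iterate stays strictly inside x > 0
    return x

# Minimize f(x) = (x - 2)^2 over x >= 0; iterates approach the root of
# 2(x - 2) - mu/x = 0, slightly to the right of the unconstrained minimum.
x_star = barrier_gradient_descent(lambda x: 2 * (x - 2))
```

Interior-point methods like those in the talk replace this naive gradient step with carefully regularized Newton-type steps and drive \(\mu\) toward zero; the toy only illustrates how a barrier enforces strict feasibility.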
3:40pm - 4:05pm Foyer Coffee Break
4:05pm - 5pm Turing Room Francis Bach (Paris)
Sum-of-Squares Relaxations for Information Theory and Variational Inference (online talk)

We consider extensions of the Shannon relative entropy, referred to as \(f\)-divergences. Three classical computational problems are typically associated with these divergences: (a) estimation from moments, (b) computing normalizing integrals, and (c) variational inference in probabilistic models. These problems are related to one another through convex duality, and for all of them there are many applications throughout data science; we aim for computationally tractable approximation algorithms that preserve properties of the original problem such as potential convexity or monotonicity. To achieve this, we derive a sequence of convex relaxations for computing these divergences from non-centered covariance matrices associated with a given feature vector: starting from the typically non-tractable optimal lower bound, we consider an additional relaxation based on "sums-of-squares", which is now computable in polynomial time as a semidefinite program, as well as further, computationally more efficient relaxations based on spectral information divergences from quantum information theory.
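For context on the objects being relaxed (a generic definition, not the talk's relaxations): a discrete \(f\)-divergence is \(D_f(p\|q) = \sum_x q(x)\, f(p(x)/q(x))\), and the choice \(f(t) = t\log t\) recovers the Shannon relative entropy. The distributions below are made up.

```python
import math

def f_divergence(p, q, f):
    """D_f(p || q) = sum_x q(x) * f(p(x) / q(x)) for discrete distributions."""
    return sum(qx * f(px / qx) for px, qx in zip(p, q))

# f(t) = t*log(t) gives the Shannon relative entropy (KL divergence).
kl_f = lambda t: t * math.log(t) if t > 0 else 0.0

p, q = [0.5, 0.5], [0.25, 0.75]
d = f_divergence(p, q, kl_f)   # = 0.5*ln(4/3), roughly 0.144
```

Exact evaluation needs the full distributions; the talk's sum-of-squares relaxations instead approximate such divergences from covariance-matrix information alone.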
5pm - 6pm Foyer Drinks


Amsterdam Science Park Congress Center, Science Park 125, 1098 XG Amsterdam.


Organizers: Daniel Dadush (CWI), Jesper Nederlof (Utrecht)