Science Park Congress Center, Amsterdam – Thursday, October 13, 2022 (google maps)
Event Description:
The Dutch Day on Optimization is the yearly in-person event related to the
Dutch Seminar on Optimization (see website).
The main purpose of the event is to bring together the Dutch Optimization
community across the areas of operations research, computer science and
discrete mathematics. The event will highlight the research of both national
and international speakers.
The lectures will take place in the Turing room in the Amsterdam Science Park
Congress Center. Registration, coffee breaks and evening drinks will take
place in the Congress Center Foyer. Lunch will be served in the Newton room
which is around the corner from the Turing room.
Registration:
Registration is free: google form. Only register if you plan to attend the
event in person. The registration deadline is Friday, October 7, but please
register as early as possible to help us with planning and catering. The
conference will also be livestreamed over Zoom, and online participants will
be able to ask questions by voice or via the chat.
Zoom Link: livestream (will activate on the event day)
Workshop Schedule: Thursday, October 13
Time 
Location 
Description 
10:25am–10:58am 
Foyer 
Registration 
10:58am–11:05am 
Turing Room 
Opening Remarks

11:05am–12pm 
Turing Room 
Lars Rohwedder (Maastricht)
Recent Advances in Flow Time Scheduling
Flow time is one of the most natural metrics to optimize in scheduling, but algorithmically it can be notoriously difficult to handle. I will talk about some recent advances in this topic, focusing on two results by myself and coauthors: a PTAS for the sum of weighted flow times on a single machine and improved approximation guarantees for parallel unrelated machines. The first result is enabled by a study of structural properties of constraints in a natural ILP formulation and the second result relies on a novel connection to discrepancy theory.
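For readers less familiar with the objective, the flow time of a job is the time it spends in the system. A minimal statement of the weighted objective discussed above (notation is ours, not taken from the talk):

```latex
% Job j has release time r_j, weight w_j, and completion time C_j
% in a given schedule. Its flow time is the time it spends in the system:
F_j = C_j - r_j
% The sum of weighted flow times objective is then:
\min \sum_j w_j F_j \;=\; \min \sum_j w_j \,(C_j - r_j)
```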

12pm–1pm 
Newton Room 
Catered Lunch 
1pm–2pm 
Turing Room 
Lightning Talks

2:00pm–2:25pm 
Foyer 
Coffee Break 
2:25pm–2:50pm 
Turing Room 
Renata Sotirov (Tilburg)
Integer Semidefinite Programming – a New Perspective
Integer semidefinite programming can be viewed as a generalization of integer programming in which the vector variables are replaced by positive semidefinite integer matrix variables. The combination of positive semidefiniteness and integrality makes it possible to formulate various optimization problems as integer semidefinite programs (ISDPs). Nevertheless, ISDPs have received attention only very recently.
In this talk we show how to extend the Chvátal–Gomory (CG) cutting-plane procedure to ISDPs. We also show how to exploit CG cuts in a branch-and-cut framework for ISDPs. Finally, we demonstrate the practical strength of the CG cuts in our branch-and-cut algorithm. Our results provide a new perspective on ISDPs.
This is joint work with F. de Meijer.
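As background for the talk above: in the classical integer-programming setting, a Chvátal–Gomory cut rounds a valid inequality, and the ISDP extension discussed in the talk generalizes this idea. A standard statement (notation is ours):

```latex
% For an integer program with feasible region {x in Z^n, x >= 0, Ax <= b},
% any multiplier vector u >= 0 gives the inequality (u^T A) x <= u^T b,
% valid for the LP relaxation. Rounding the coefficients and the
% right-hand side down preserves validity for all integer feasible x:
\lfloor u^{T} A \rfloor \, x \;\le\; \lfloor u^{T} b \rfloor, \qquad u \ge 0
```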

2:50pm–3:15pm 
Turing Room 
Guus Regts (UvA)
The independence polynomial: algorithms and complex zeros
Determining whether a graph has an independent set of size \(k\) is a
well-known NP-hard problem. Clearly, counting the number of independent sets
of size \(k\) is at least as hard. Perhaps more surprising is that determining
the number of all independent sets is just as hard, even for graphs of
maximum degree \(3\). In this talk I will survey some recent results about the
hardness of *approximately* determining the number of independent sets and,
more generally, of other evaluations of the generating function of the
independent sets in a graph, also known as the independence polynomial. The
location of the complex zeros of the independence polynomial plays an
important role.
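For concreteness, the generating function referred to above is commonly written as follows (standard notation, not taken from the abstract):

```latex
% Independence polynomial of a graph G, summing over all
% independent sets I of G (including the empty set):
Z_G(\lambda) \;=\; \sum_{I \text{ independent in } G} \lambda^{|I|}
% Evaluating Z_G at lambda = 1 counts all independent sets of G.
```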

3:15pm–3:40pm 
Turing Room 
Mathias Staudigl (Maastricht)
On the iteration complexity of first- and second-order Hessian barrier algorithms for nonconvex nonsmooth conic optimization
A key problem in mathematical imaging, signal processing and computational statistics is the minimization of nonconvex objective functions over conic domains, which are continuous but potentially nonsmooth at the boundary of the feasible set. For such problems, this paper proposes a new family of first- and second-order interior-point methods for nonconvex and nonsmooth conic constrained optimization problems, combining the Hessian-barrier method with quadratic and cubic regularization techniques. Our approach is based on a potential-reduction mechanism and attains a suitably defined class of approximate first- or second-order KKT points with worst-case iteration complexity \(O(\epsilon^{-2})\) (first-order) and \(O(\epsilon^{-3/2})\) (second-order), respectively. Based on these findings, we develop a new double-loop path-following scheme attaining the same complexity, modulo adjusting constants. These complexity bounds are known to be optimal in the unconstrained case, and our work shows that they are upper bounds in the case with complicated constraints as well. A key feature of our methodology is the use of self-concordant barriers to construct strictly feasible iterates via a disciplined decomposition approach, without sacrificing the iteration complexity of the method. To the best of our knowledge, this work is the first to achieve these worst-case complexity bounds under such weak conditions for general conic constrained optimization problems.
This is joint work with Pavel Dvurechensky (WIAS Berlin) and based on the paper: https://arxiv.org/abs/2111.00100

3:40pm–4:05pm 
Foyer 
Coffee Break


4:05pm–5pm 
Turing Room 
Francis Bach (Paris)
Sum-of-Squares Relaxations for Information Theory and Variational Inference (online talk)
We consider extensions of the Shannon relative entropy, referred to as \(f\)-divergences. Three classical computational problems are typically associated with these divergences: (a) estimation from moments, (b) computing normalizing integrals, and (c) variational inference in probabilistic models. These problems are related to one another through convex duality, and for all of them there are many applications throughout data science; we aim for computationally tractable approximation algorithms that preserve properties of the original problem such as potential convexity or monotonicity. To achieve this, we derive a sequence of convex relaxations for computing these divergences from non-centered covariance matrices associated with a given feature vector: starting from the typically intractable optimal lower bound, we consider an additional relaxation based on "sums of squares", which is now computable in polynomial time as a semidefinite program, as well as further computationally more efficient relaxations based on spectral information divergences from quantum information theory (see https://arxiv.org/abs/2206.13285 for details).
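As background, an \(f\)-divergence between two distributions is commonly defined as follows (standard definition; notation is ours, not taken from the abstract):

```latex
% For a convex function f with f(1) = 0, the f-divergence between
% probability distributions p and q (with q(x) > 0) is:
D_f(p \,\|\, q) \;=\; \sum_{x} q(x)\, f\!\left(\frac{p(x)}{q(x)}\right)
% Taking f(t) = t log t recovers the Shannon relative entropy
% (the Kullback-Leibler divergence).
```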

5pm–6pm 
Foyer 
Drinks


Location
Amsterdam Science Park Congress Center, Science Park 125, 1098 XG Amsterdam (google maps).
Organizers
Daniel Dadush (CWI), Jesper Nederlof (Utrecht)
Sponsors
