### PhD Student Position:

I am looking for a talented and motivated student with a strong interest in any of the subjects below (detailed descriptions follow):
• Algorithms for lattice problems and their applications to integer programming, cryptography and wireless communications.
• High dimensional convex geometry and the geometry of numbers.
• Discrepancy minimization and its applications to approximation algorithms.
• Algorithms for linear programming, analysis of the simplex method and the geometry of polyhedra.
The student will conduct research on the subject of High Dimensional Geometric Algorithms, present the fruits of this research at top conferences and publish them in quality peer-reviewed journals of the field, culminating in a PhD thesis to be defended in public.

### Requirements:

The PhD candidate is required to have a strong background in computer science and/or mathematics. Candidates are expected to have an excellent command of English, and good academic writing and presentation skills. The PhD candidate will be expected to perform independent research leading to publications in top conferences and journals.

### Terms and conditions:

The candidate will be appointed for an initial period of four years at CWI. The terms of employment at CWI are in accordance with the Dutch Collective Labour Agreement for Research Centres ("CAO-onderzoeksinstellingen"). The gross monthly salary for a PhD student on a full-time basis is €2,211 during the first year, increasing to approximately €2,834 over the four-year period, the latter in accordance with the universities' terms of employment. Employees are also entitled to a holiday allowance of 8% of the gross annual salary and a year-end bonus of 8.33%. CWI offers attractive working conditions, including flexible scheduling and help with housing for expat employees. Please visit our website for more information about our terms of employment: cwi.nl/terms-of-employment.

### Application & Information:

Applications can be sent to dadush@cwi.nl. All applications should include:
• A detailed CV including a summary of technical and scientific experiences and a synopsis of the diploma thesis. If applicable, the CV may contain detailed descriptions of previous research projects, a list of publications, and/or presentations at scientific meetings.
• A letter of motivation including research interests and reasons for applying for this position.
• Undergraduate-level certificates, including university grades and a detailed list of university courses (with grades).
• Two letters of recommendation from references who are acquainted with the applicant's previous academic and/or research/professional activity.
The position is available immediately, though the starting date is flexible. Applications will be accepted until the position is filled.

### Integer Programming and the Structure of Lattice Points

Description: Ever since the 1980s, following the breakthrough works of Lenstra 82 and Kannan 87, the complexity of solving an integer inequality system $Ax \leq b, x \in \Z^n$ in n variables (IP) has been stuck at $n^{O(n)}$. The main bottleneck is a beautiful question in the geometry of numbers: what is the best way to decompose an almost integer-point-free n-dimensional convex body? A conjecture of Kannan & Lovasz 88 posits that there is an "easy way" to decompose any such body using $(\log n)^{O(k)}$ pieces of dimension $n-k$ for the right choice of $k$, which leaves open the possibility of a $(\log n)^{O(n)}$-time algorithm for IP. This conjecture has very recently been verified for the important special case of ellipsoids, together with an efficient algorithm for computing these decompositions.
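To make the problem concrete, here is a minimal brute-force sketch (emphatically not the Lenstra or Kannan algorithm, and with a bounding box assumed to be supplied by the caller) that decides feasibility of $Ax \leq b$ over integer points by plain enumeration. Its search space grows exponentially with the dimension, which is exactly the behavior that structural decomposition results aim to circumvent.

```python
# Toy illustration of integer feasibility: enumerate all integer points in a
# box and test A x <= b. Exponential in the dimension n.
from itertools import product

def ip_feasible(A, b, lo, hi):
    """Return an integer point x with A x <= b in the box [lo, hi]^n, or None."""
    n = len(A[0])
    for x in product(range(lo, hi + 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) <= b[i]
               for i in range(len(A))):
            return x
    return None

# Feasibility of: 2x + 3y <= 12, x >= 0, y >= 0, x + y >= 3
A = [[2, 3], [-1, 0], [0, -1], [-1, -1]]
b = [12, 0, 0, -3]
print(ip_feasible(A, b, 0, 6))  # a feasible integer point, e.g. (0, 3)
```

Even this tiny example scans up to $7^2$ candidates; in $n$ dimensions the same box would contain $7^n$ points, which is why the conjectured decompositions matter.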

Given the recent progress, the main thrust of this project will be to fully resolve the Kannan & Lovasz conjecture. This will involve developing new tools in convex geometry and the geometry of numbers.

### Discrepancy Minimization

Description: A crucial task in many approximation algorithms is to round a fractional solution to a nearby zero-one solution. Discrepancy minimization is a powerful and flexible technique for performing this rounding while incurring only "small error". Most generally, one considers the following problem: given a norm (which quantifies the error), a desired error $\epsilon$, and a point $t$ in the cube $[0,1]^n$, find a 0/1 point of the cube at distance at most $\epsilon$ from $t$ (assuming one exists). Techniques for this problem were recently applied to improve the additive approximation for the classical bin packing problem, and many more applications seem to be on the horizon.
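The rounding problem above can be stated as a few lines of code. The sketch below (a deliberately naive $2^n$ enumeration, which real discrepancy-minimization algorithms are designed to avoid) finds the cube vertex closest to $t$ under a caller-supplied norm:

```python
# Brute-force rounding: find the 0/1 point closest to t under a given norm.
from itertools import product

def round_to_cube_vertex(t, norm):
    """Return the 0/1 point x minimizing norm(x - t)."""
    n = len(t)
    return min(product((0, 1), repeat=n),
               key=lambda x: norm([x[i] - t[i] for i in range(n)]))

sup_norm = lambda v: max(abs(c) for c in v)
t = [0.3, 0.8, 0.5]
x = round_to_cube_vertex(t, sup_norm)
print(x)  # (0, 1, 0): each coordinate rounded, sup-norm error 0.5
```

Note that under the sup norm the error of coordinate-wise rounding is always at most 1/2; the interesting discrepancy questions concern norms that couple the coordinates.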

An important research direction we will pursue, initiated by the breakthrough work of Bansal in 2010, is to design efficient algorithms achieving discrepancy guarantees that were previously only known existentially. In particular, we will focus on making algorithmic a result of Banaszczyk for the so-called vector balancing problem, which has numerous applications. Continuing in this vein, we will find ways to make known discrepancy algorithms more efficient and flexible. In particular, one would often like to combine the discrepancy guarantees of different algorithms, and understanding when this is possible will be an important focus of the research.
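For concreteness, the vector balancing problem asks: given vectors $v_1, \ldots, v_n$, choose signs $\epsilon_i \in \{-1, +1\}$ so that $\|\sum_i \epsilon_i v_i\|$ is small. The hedged sketch below solves tiny instances by exhaustive search over all $2^n$ signings; the open algorithmic question is to match Banaszczyk's existential bound efficiently, without such enumeration.

```python
# Exhaustive vector balancing: try all sign patterns and keep the best one.
from itertools import product
from math import sqrt

def best_signing(vs):
    """Find signs eps in {-1,+1}^n minimizing the Euclidean norm of sum eps_i v_i."""
    n, d = len(vs), len(vs[0])
    def signed_norm(eps):
        s = [sum(eps[i] * vs[i][j] for i in range(n)) for j in range(d)]
        return sqrt(sum(c * c for c in s))
    return min(product((-1, 1), repeat=n), key=signed_norm)

vs = [(1.0, 0.0), (0.6, 0.8), (0.0, 1.0)]  # three unit vectors in the plane
eps = best_signing(vs)
print(eps)
```

Random signs already give a norm of $O(\sqrt{n})$ in expectation for unit vectors; the point of Banaszczyk-type results is to do much better in structured settings.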

As a second major direction, there are many discrepancy questions where the best known existential bounds are believed to be far from tight. One such question is that of Steinitz, which asks when an input sequence of vectors can be permuted so that all of its prefix sums have small norm. Both improved algorithms and improved bounds for this problem would have many interesting consequences.
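The Steinitz rearrangement problem can be illustrated in a few lines. This toy sketch brute-forces all orderings of a small set of vectors that sum to zero, minimizing the largest sup-norm of any prefix sum; the research questions concern dimension-dependent bounds and algorithms that avoid the $n!$ search.

```python
# Brute-force Steinitz rearrangement: minimize the worst prefix-sum norm.
from itertools import permutations

def steinitz_brute(vs):
    """Return (worst prefix sup-norm, ordering), minimized over all orderings."""
    d = len(vs[0])
    def max_prefix(order):
        s, worst = [0.0] * d, 0.0
        for v in order:
            s = [s[j] + v[j] for j in range(d)]
            worst = max(worst, max(abs(c) for c in s))
        return worst
    order = min(permutations(vs), key=max_prefix)
    return max_prefix(order), order

vs = [(1, 0), (1, 0), (-1, 1), (-1, -1)]  # the vectors sum to zero
val, order = steinitz_brute(vs)
print(val)  # 1.0: some ordering keeps every prefix sum in the unit sup-norm ball
```

Since the first prefix is a single input vector, no ordering can do better than the largest single-vector norm; here the brute force certifies that this trivial lower bound is achieved.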

### Lattice Coding

Description: Lattices are "regular periodic grids" that are very useful for encoding messages either securely or in a noise-robust way. In both settings, decoding generally corresponds to finding the closest lattice point to a given target; however, the encoding processes, while similar, are conceptualized differently. In the cryptographic setting, a message is generally encoded as a randomly perturbed lattice point, where a public description of the lattice (the public key) is generated together with side information (the secret key) so that decoding is easy with the secret key and difficult without it. For error-tolerant communication, we encode the message as a lattice point and assume that the channel perturbs it with noise, from which we would like to recover the original message.
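A hedged two-dimensional sketch of the noise-tolerant setting: encode integer message coordinates $m$ as the lattice point $Bm$, add small noise, and decode with simple coordinate rounding $\mathrm{round}(B^{-1}y)$ (a special case of Babai-style decoding). This succeeds only when the noise is small relative to the basis; finding the true closest lattice point in general is the hard problem referred to above.

```python
# Lattice encode/decode by rounding in basis coordinates (2D toy example).

def decode_round(B_inv, y):
    """Round the basis coordinates B_inv * y to the nearest integers."""
    return [round(B_inv[i][0] * y[0] + B_inv[i][1] * y[1]) for i in range(2)]

def encode(B, m):
    """Map integer message coordinates m to the lattice point B * m."""
    return [B[0][0] * m[0] + B[0][1] * m[1], B[1][0] * m[0] + B[1][1] * m[1]]

B = [[2, 1], [0, 2]]                  # basis matrix (columns are basis vectors)
B_inv = [[0.5, -0.25], [0.0, 0.5]]    # its exact inverse
msg = [3, -2]
noisy = [c + e for c, e in zip(encode(B, msg), (0.3, -0.4))]  # channel noise
print(decode_round(B_inv, noisy))  # recovers [3, -2]
```

With larger noise the rounding decoder starts to fail, which is precisely the gap that better decoding algorithms, and better lattice families, aim to close.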

Notice that the main difference between the two cases is who adds the noise: for secure communication we add the noise ourselves to hide the message, while in error-tolerant communication we assume that "nature" adds it. While both problems share striking similarities, cryptographers thinking about secure communication and coding theorists thinking about noise-robust communication rarely interact to share tools and techniques. Recently, new tools have been developed in both fields that can be exploited for cross-fertilization, e.g. the maturing technology of discrete Gaussian sampling in cryptography and lattice algorithms, and polar lattice codes in coding theory.

The main goal of this project will be to use the recent techniques from both fields to devise novel encoding and decoding algorithms for both extant and new lattice families, as well as find novel applications of these techniques across disciplines.

### Linear Programming

Description: Linear programming (LP), i.e. solving problems of the form $\max c \cdot x, Ax \leq b, x \in \R^n$, has a vast number of applications and represents one of the most fundamental classes of optimization problems. To cope with the growing complexity of analyzing and planning based on a deluge of data, there is a growing need to solve huge LPs that many current methodologies are unable to handle. On a more basic level, the factors determining how difficult LPs are to solve from a theoretical perspective are still poorly understood.
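The geometric fact underlying all the methods discussed below is that a bounded feasible LP attains its optimum at a vertex of the polyhedron $Ax \leq b$. For two variables this can be demonstrated by a toy vertex-enumeration sketch (practical solvers do nothing of the sort, since the number of vertices can be exponential):

```python
# Toy 2-variable LP solver: enumerate pairwise constraint intersections
# (candidate vertices), keep the feasible ones, return the best.
from itertools import combinations

def lp2d(c, A, b, eps=1e-9):
    """Maximize c.x subject to A x <= b in two variables, by vertex enumeration."""
    best = None
    for i, j in combinations(range(len(A)), 2):
        det = A[i][0] * A[j][1] - A[i][1] * A[j][0]
        if abs(det) < eps:
            continue  # parallel constraints: no unique intersection point
        x = [(b[i] * A[j][1] - A[i][1] * b[j]) / det,   # Cramer's rule
             (A[i][0] * b[j] - b[i] * A[j][0]) / det]
        if all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] + eps for k in range(len(A))):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val > best[0]:
                best = (val, x)
    return best  # (optimal value, optimal vertex), or None if infeasible/unbounded

# maximize x + y subject to x <= 2, y <= 3, x + y <= 4, x >= 0, y >= 0
print(lp2d([1, 1], [[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]], [2, 3, 4, 0, 0]))
```

The simplex method can be viewed as a smarter walk along the edges of this same vertex set, which is why the geometry of polyhedra is central to understanding its running time.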

The research in this area will proceed along multiple fronts. Firstly, we will explore the power of simple rescaling algorithms for LP, which were developed over the last ten years to balance simplicity of implementation with theoretical efficiency. Here the goal will be to develop methods which are both practical for large instances and on the theoretical level help convincingly explain the geometry controlling the complexity of LP.

Over the last few years, tremendous theoretical progress has been made in the context of interior point methods, which represent the current most theoretically efficient algorithms for LP. While rescaling methods and interior point methods both heavily exploit the geometric properties of linear programs, the insights generated from both approaches have remained relatively orthogonal. A main goal of the research here will be to bridge this divide and fruitfully combine the techniques from both areas to yield faster LP algorithms.

As a final research vein, we will improve our understanding of the most popular method used in practice today: the simplex method developed by Dantzig 60 years ago. While not polynomial in general, the method is heavily used because of its good practical performance on "real life instances", and perhaps most importantly, because it is very well adapted for solving sequences of related linear programs (often arising from adding cuts to LP relaxations of integer programs). One major research push will be to improve known bounds for the running time of the simplex method on "realistic instances", in particular, using the smoothed analysis framework of Spielman and Teng. Secondly, we will attempt to give theoretical justification for why the simplex method performs well on related LPs, a subject which has received almost no theoretical attention.