Smoothed Analysis Day at CWI: February 26, 2010
The program is given below:
10:00-10:10
Opening
10:10-11:00
Berthold Vöcking, RWTH Aachen, Germany
Random Knapsack in Expected Polynomial Time
Abstract:
The 0/1 knapsack problem is one of the most intensively studied combinatorial optimization problems, with a wide range of applications in industry and financial management, e.g., cargo loading, cutting stock, and budget control. The knapsack problem is one of those optimization problems which NP-hardness theory classifies as hard to solve in the worst case. In contrast, large-scale instances occurring in practice can often be solved to optimality in almost linear time. We address this gap between theory and practice by means of a probabilistic (smoothed) analysis.
For various input distributions (covering, e.g., the model of smoothed analysis), we prove that the number of Pareto-optimal solutions to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently. Our polynomial upper bound on the expected number of Pareto-optimal solutions shows that this algorithm solves the knapsack problem in expected polynomial time on random or randomly perturbed instances. For some input distributions, e.g., if the input numbers are drawn independently and uniformly at random, our analysis even yields a bound on the expected running time of so-called knapsack core algorithms that is almost linear in the number of items.
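For concreteness, the Nemhauser-Ullmann algorithm admits a compact sketch: items are added one at a time while a list of Pareto-optimal (weight, profit) pairs is maintained. The Python sketch below is illustrative only (names and data layout are ours); note that each pass takes time linear in the current size of the Pareto frontier, which is exactly the quantity bounded by the smoothed analysis.

def pareto_knapsack(items):
    """items: list of (weight, profit) pairs.
    Returns all Pareto-optimal (weight, profit) value pairs, i.e. those
    not dominated by a solution of at most the weight and more profit."""
    frontier = [(0, 0)]  # the empty solution
    for w, p in items:
        # Either leave the new item out, or add it to every old solution.
        extended = [(fw + w, fp + p) for fw, fp in frontier]
        merged = sorted(frontier + extended, key=lambda t: (t[0], -t[1]))
        # Sweep by increasing weight; keep a pair only if it strictly
        # improves on the best profit seen so far.
        frontier, best = [], float("-inf")
        for fw, fp in merged:
            if fp > best:
                frontier.append((fw, fp))
                best = fp
    return frontier

# Example: pareto_knapsack([(3, 5), (2, 4), (4, 3)])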
Joint work with Rene Beier.
Coffee break
11:30-12:30
Heiko Röglin, Maastricht University, The Netherlands
Smoothed Analysis of Multiobjective Optimization
Abstract:
A well-established heuristic approach for solving various multicriteria
optimization problems is to enumerate the set of Pareto-optimal
solutions. The heuristics following this principle are often successful
in practice, even though the number of Pareto-optimal solutions can be
exponential in the worst case.
We analyze multiobjective optimization problems in the framework of smoothed analysis, and we prove that the smoothed number of Pareto-optimal solutions in any multiobjective binary optimization problem with a finite number of linear objective functions is polynomial. Moreover, we give polynomial bounds on all finite moments of the number of Pareto-optimal solutions, which yields the first non-trivial concentration bound for this quantity.
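To make the counted object concrete, the following brute-force sketch (our names; all objectives are to be maximized) enumerates the Pareto set of a binary optimization problem with several linear objectives. It takes exponential time and serves only to illustrate the set whose expected size the talk bounds.

from itertools import product

def pareto_set(objectives, n):
    """objectives: list of coefficient vectors, one per linear objective.
    Returns the binary vectors of length n whose objective-value vectors
    are not dominated (>= everywhere and > somewhere) by another one."""
    def values(x):
        return tuple(sum(c * xi for c, xi in zip(obj, x)) for obj in objectives)
    sols = [(x, values(x)) for x in product((0, 1), repeat=n)]
    def dominates(w, v):
        return all(a >= b for a, b in zip(w, v)) and any(a > b for a, b in zip(w, v))
    return [x for x, v in sols if not any(dominates(w, v) for _, w in sols)]

# Example: pareto_set([(3, 1, 4), (1, 5, 2)], 3)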
This talk is based on joint work with Rene Beier, Shang-Hua Teng, and Berthold Vöcking.
Lunch break
13:30-14:30
Bodo Manthey, University of Twente, The Netherlands
k-Means has Polynomial Smoothed Complexity
Abstract:
The k-means method is one of the most widely used clustering algorithms,
drawing its popularity from its speed in practice. Recently, however, it
was shown to have exponential worst-case running time. In order to close
the gap between practical performance and theoretical analysis, we study
the k-means method in the model of smoothed analysis.
We show that the smoothed running time of the k-means method is bounded by a polynomial in the input size and 1/sigma, where sigma is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
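As a minimal illustration of this perturbation model (names and parameters are ours, not the speakers'), one can take an arbitrary point set, add independent Gaussian noise with standard deviation sigma to every coordinate, and then run the k-means (Lloyd) method:

import numpy as np

def kmeans(points, centers, max_iters=100):
    """The k-means (Lloyd) method: alternately assign each point to its
    nearest center and move each center to its cluster's mean, until the
    centers stop changing."""
    for _ in range(max_iters):
        # Assignment step: each point goes to its nearest center.
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster.
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if (labels == j).any() else centers[j]
                                for j in range(len(centers))])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                  # stand-in for an adversarial input
sigma = 0.1
Xp = X + rng.normal(scale=sigma, size=X.shape)  # the Gaussian perturbation
init = Xp[rng.choice(len(Xp), size=3, replace=False)]
centers, labels = kmeans(Xp, init)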
Additionally, we will discuss extensions of the smoothed analysis to k-means clustering with respect to so-called Bregman divergences, which form a quite general class of distance measures.
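For orientation, the Bregman divergence generated by a convex function phi is D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>; a small sketch in our notation:

import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence of x from y generated by the convex function phi."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

sq = lambda x: float(np.dot(x, x))           # phi for squared Euclidean distance
sq_grad = lambda x: 2 * x
ne = lambda x: float(np.sum(x * np.log(x)))  # negative entropy, phi for KL
ne_grad = lambda x: np.log(x) + 1

x, y = np.array([0.2, 0.8]), np.array([0.5, 0.5])
bregman(sq, sq_grad, x, y)  # equals ||x - y||^2
bregman(ne, ne_grad, x, y)  # equals KL(x || y) for probability vectors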
This talk is based on joint work with David Arthur (Stanford University) and Heiko Röglin (Maastricht University).
Coffee break
14:55-15:45
Tjark Vredeveld, Maastricht University, The Netherlands
Smoothed Competitive Analysis: A Probabilistic Analysis of the Multi-level Feedback Algorithm
Abstract:
In this talk, we apply the notion of smoothed competitive analysis to the Multi-Level Feedback (MLF) algorithm for minimizing the total flow time of a sequence of jobs released over time, where the processing time of a job becomes known only upon its completion. We use a partial bit randomization model to smooth the original (integral) processing times of the jobs and prove a bound on the smoothed competitive ratio of MLF. In particular, the competitive ratio drops from the largest processing time in the worst case to a constant when the job sizes are drawn uniformly at random.
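The partial bit randomization model itself is simple to state: each integral processing time keeps its high-order bits while its k least significant bits are replaced by uniformly random bits, so increasing k interpolates between the adversarial input and fully random job sizes. A minimal sketch, with parameter names ours:

import random

def smooth_processing_time(p, k, rng=random):
    """Replace the k least significant bits of the integral processing
    time p with independent uniformly random bits."""
    return (p >> k << k) | (rng.getrandbits(k) if k > 0 else 0)

# Example: smoothing p = 0b101101 with k = 3 keeps the prefix 0b101
# and randomizes the trailing three bits.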
We also show that this smoothed competitive ratio is tight up to a constant factor and that no deterministic algorithm can achieve a better one when the processing times are smoothed according to the partial bit randomization model. For various other smoothing models, including the additive symmetric smoothing model, a variant of the model used by Spielman and Teng, we prove a higher lower bound of \Omega(2^K).
This talk is based on joint work with Luca Becchetti, Stefano Leonardi, Alberto Marchetti-Spaccamela and Guido Schäfer.
Coffee break
16:10-17:00
Guido Schäfer, CWI Amsterdam, The Netherlands
Topology Matters: Smoothed Competitive Analysis of Metrical Task Systems
Abstract:
Metrical task systems capture a large class of online problems in
which a server resides in an edge-weighted graph and has to serve a
sequence of tasks that arrive online. The server can move in the graph
at a cost equal to the distance it travels. Each task specifies for
each node a request cost that is incurred if the server decides to
serve the task at that node. The objective is to minimize
the total request plus travel cost.
Borodin, Linial and Saks (JACM '92) gave a deterministic online algorithm for metrical task systems, the so-called work function algorithm, which achieves a competitive ratio of 2n-1, where n is the number of nodes of the graph. They also showed that this is best possible.
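For reference, the work function algorithm admits a compact description: after each task it updates the work function w(s), i.e. the cheapest cost of serving all tasks so far and ending in node s, and then moves to the node minimizing w(s) plus the distance from its current position. A minimal sketch under this standard formulation (data layout and names are ours):

def wfa(nodes, d, tasks, start):
    """nodes: list of nodes; d: dict-of-dicts metric on the nodes;
    tasks: list of dicts mapping each node to its request cost.
    Returns the total request plus travel cost incurred by WFA."""
    w = {s: d[start][s] for s in nodes}  # work function before any task
    pos, total = start, 0
    for r in tasks:
        # w(s) = min over s' of (old w(s') + request cost at s' + d(s', s))
        w = {s: min(w[u] + r[u] + d[u][s] for u in nodes) for s in nodes}
        # WFA rule: move to the node minimizing w(s) + d(pos, s).
        nxt = min(nodes, key=lambda s: w[s] + d[pos][s])
        total += d[pos][nxt] + r[nxt]
        pos = nxt
    return total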
We present a smoothed competitive analysis of the work function algorithm (WFA), in which random noise is added to the request costs. Our analysis reveals that the smoothed competitive ratio of WFA is much better than 2n-1 and depends on certain topological parameters of the underlying graph, such as its maximum degree and diameter.
This talk is based on joint work with Naveen Sivadasan.