MBEA 2016

Model-Based Evolutionary Algorithms

Organized by: Peter A.N. Bosman and John McCall

Workshop as part of the Genetic and Evolutionary Computation Conference (GECCO 2016)
on July 20-24, 2016 (Wednesday - Sunday), Denver, Colorado

Workshop date: July 20, 2016

Genetic and evolutionary algorithms (GEAs) evolve a population of candidate solutions to a given optimization problem using two basic operators: (1) selection and (2) variation. Selection introduces a pressure toward high-quality solutions, whereas variation ensures exploration of the space of all potential solutions. Two variation operators are common in genetic and evolutionary computation: (1) crossover, and (2) mutation. Crossover creates new candidate solutions by combining bits and pieces of promising solutions, whereas mutation introduces slight perturbations to promising solutions to explore their immediate neighborhood.
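The two traditional variation operators can be illustrated with a minimal sketch for binary strings; the function names and the choice of one-point crossover and per-bit mutation rate are illustrative, not prescribed by any particular GEA.

```python
import random

rng = random.Random(1)

def one_point_crossover(a, b):
    """Combine bits and pieces of two parents by swapping their tails
    at a randomly chosen cut point."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def bit_flip_mutation(x, rate=None):
    """Slightly perturb a solution: flip each bit independently with a
    small probability (here the common default of 1/len(x))."""
    rate = rate if rate is not None else 1 / len(x)
    return [1 - bit if rng.random() < rate else bit for bit in x]

parent1 = [0] * 8
parent2 = [1] * 8
child1, child2 = one_point_crossover(parent1, parent2)
mutant = bit_flip_mutation(parent1)
```

Note that both operators are fixed and problem-independent: they manipulate the representation without any knowledge of which bit combinations actually matter for solution quality.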

However, fixed, problem-independent variation operators often fail to exploit important features of the high-quality solutions found by selection. This can make optimization inefficient on problems where variation operators informed by learnable problem features would offer a performance advantage.

One way to make variation operators more powerful and flexible is to replace the traditional variation of GEAs by

  1. Modelling key features of solutions that influence their quality, and
  2. Generating a new population of candidate solutions from the model, in the expectation of improved solution quality.

When the model is a probability distribution, such evolutionary algorithms are commonly called estimation-of-distribution algorithms (EDAs). This includes such algorithms as PBIL, UMDA, CGA, ECGA, EBNA, LFDA, BOA, hBOA, PBIL_C, EGNA, EMNA, DEUM, AMaLGaM, CMA-ES, ACO and natural-gradient-based optimization algorithms, including NES and xNES.
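The model-then-sample loop can be sketched with the simplest of the algorithms listed above, UMDA, whose model is just the vector of per-variable marginal probabilities. This is a minimal sketch assuming a binary OneMax benchmark; the population sizes, clamping bounds, and function names are illustrative choices, not part of any canonical implementation.

```python
import random

def onemax(x):
    """Fitness: number of ones in the bit string."""
    return sum(x)

def umda(fitness, n_bits, pop_size=100, n_select=50, generations=50, seed=0):
    """Minimal UMDA: the model is a per-bit marginal probability vector,
    estimated from the selected solutions and then sampled from."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits  # start from the uniform distribution
    best = None
    for _ in range(generations):
        # Sample a new population from the current model.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # Selection: keep the best n_select solutions.
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]
        if best is None or fitness(selected[0]) > fitness(best):
            best = selected[0]
        # Model estimation: per-bit frequency in the selected set,
        # clamped away from 0 and 1 to avoid premature convergence.
        for i in range(n_bits):
            freq = sum(x[i] for x in selected) / n_select
            probs[i] = min(max(freq, 1 / n_bits), 1 - 1 / n_bits)
    return best

best = umda(onemax, n_bits=20)
```

Because UMDA models each variable independently, it captures no interactions between variables; the richer algorithms named above (e.g. ECGA, BOA, hBOA) differ mainly in replacing this marginal-probability model with one that can represent dependencies.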

EDAs in fact belong to a broader class of model-based evolutionary algorithms (MBEAs) that learn and store more general structures, such as linkage models, variable dependency structures, and hypergraphs, or that operate on ensembles of models. Examples include LTGA and DSMGA(-II), which do not construct a probabilistic model. Such algorithms have the potential to be more robust to changes in problem formulation, making them generally more attractive for solving black-box optimization problems.

Moreover, since their search trajectories are determined by explicit models, model-based algorithms are more amenable to theoretical study, including approaches such as run-time analysis. The understanding gained here can lead to more principled algorithm design, informed selection of suitable representations and generalisation beyond empirical benchmark testing.

With the ending of the EDA track in the main GECCO conference, the focus on MBEAs risks becoming scattered across different tracks. The purpose of this workshop is therefore to provide a unique forum to discuss research on, and applications of, model-based evolutionary algorithms.

In support of these goals, the organizers will invite well-known researchers who are active in the design and application of model-based evolutionary algorithms to give talks in the MBEA workshop. Moreover, a panel discussion will be organized on the unified future of model-based evolutionary algorithms.