Program
The workshop will be held on Wednesday, July 20, 2016 in room Chasm Creek B. The following talks
will be given. Where available, the slides of the presentation can be downloaded by clicking the
title of the talk.
16:10–16:20

Welcome and opening
Peter A.N. Bosman and John McCall

16:20–16:40

The Genepool Optimal Mixing Evolutionary Algorithm (GOMEA): what and why?
Dirk Thierens (Utrecht University)

16:40–17:00

The Long (and Fun) Road that Led to DSMGA-II
Tian-Li Yu (National Taiwan University)

Abstract:
Most early efforts on model-based evolutionary
algorithms focused on probabilistic modeling, while my
research has centered more on deterministic modeling, such
as matrices constructed via pairwise linkage detection.
Compared with multivariate detection, pairwise detection
runs less risk of overfitting the current population. I will
talk about an investigation of several linkage detection
metrics and show that, for separable problems, optimal
thresholds make most metrics indistinguishable.
For problems with overlapping building-block structures, I
will discuss the ideas of minCut and minCut+, which make the
most of the building-block structure. However,
threshold finding remains a crucial issue. The linkage tree
genetic algorithm (LTGA), proposed in 2010, drops the need
for a threshold (via the linkage tree), and its decision
making is noiseless (via optimal mixing), which
significantly reduces the population-size requirement.
Inspired by LTGA, the dependency structure matrix genetic
algorithm II (DSMGA-II), proposed in 2015, consists of four
primary components: pairwise linkage detection, the
incremental linkage set, restricted mixing, and back mixing.
In this talk, I will discuss the ideas behind these
techniques, along with the previous work that led to them.
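The pairwise linkage detection described above can be illustrated with a minimal sketch: estimating the mutual information between every pair of binary variables in a population, one of several possible detection metrics. The function and the toy population below are illustrative assumptions, not code from DSMGA-II itself:

```python
import math

def pairwise_mi(pop):
    """Mutual information between every pair of binary variables
    in a population of bit lists; high values suggest linkage."""
    n = len(pop[0])
    N = len(pop)
    mi = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # joint frequencies of bits i and j over the population
            joint = {}
            for ind in pop:
                key = (ind[i], ind[j])
                joint[key] = joint.get(key, 0) + 1
            # marginal frequencies of each bit
            pi = {b: sum(c for (a, _), c in joint.items() if a == b) / N
                  for b in (0, 1)}
            pj = {b: sum(c for (_, bb), c in joint.items() if bb == b) / N
                  for b in (0, 1)}
            m = 0.0
            for (a, b), c in joint.items():
                p = c / N
                if p > 0 and pi[a] > 0 and pj[b] > 0:
                    m += p * math.log(p / (pi[a] * pj[b]))
            mi[i][j] = mi[j][i] = m
    return mi
```

In a population where bits 0 and 1 always agree while bit 2 varies freely, the matrix entry for the pair (0, 1) is large and the entries involving bit 2 are near zero, which is the signal a pairwise linkage model builds on.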


17:00–17:20

When the gray box was opened, model-based evolutionary algorithms were already there
Roberto Santana (University of the Basque Country)

Abstract:
The concept of gray box optimization, in juxtaposition to
black box optimization, revolves around the idea of
exploiting the problem structure to implement more efficient
evolutionary algorithms (EAs). A number of approaches in
evolutionary computation have previously addressed the
question of exploiting problem information to enhance
the performance of EAs. In particular, factorization-based
distribution algorithms whose factorizations are directly
derived from the problem structure have provided important
gains in terms of search efficiency. Similarly, methods for
problem structure approximation based on statistical
techniques have been consistently used to improve search
efficiency and unveil previously unknown information about
real-world optimization problems. This talk discusses the
relationship between gray box optimization,
factorization-based EAs and methods for automatic discovery
of problem structure. We argue that a path to further
exploit the a priori known structural information about a
given problem is the conception of more efficient sampling
methods. Recent work on the application of sampling methods
originating in the field of statistical physics to
model-based EAs is discussed. Finally, we address the
relationship between EAs that exploit information about the
structure of a given problem and evolvability, where the
latter is understood as the ability of a population to
generate heritable phenotypic variation.
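Factorization-based sampling can be sketched as follows: given an a priori known factorization of the variables into disjoint blocks, estimate the empirical distribution of each block in the current population and sample new solutions factor by factor. The function name, block encoding, and simple independence assumption are illustrative, not a specific algorithm from the talk:

```python
import random
from collections import Counter

def sample_factorized(pop, blocks, n_samples):
    """Sample new solutions from block-wise marginals estimated
    on a population. `blocks` is an assumed problem-structure
    factorization: disjoint index tuples, each one factor."""
    # estimate the empirical distribution of each factor
    factors = []
    for blk in blocks:
        counts = Counter(tuple(ind[i] for i in blk) for ind in pop)
        total = sum(counts.values())
        values = list(counts)
        weights = [counts[v] / total for v in values]
        factors.append((blk, values, weights))
    n = len(pop[0])
    out = []
    for _ in range(n_samples):
        child = [0] * n
        for blk, values, weights in factors:
            # draw one joint setting for this whole block
            setting = random.choices(values, weights)[0]
            for i, v in zip(blk, setting):
                child[i] = v
        out.append(child)
    return out
```

Because each block is sampled as a unit, patterns that co-occur within a factor in the population (e.g., two bits that always agree) are preserved in the offspring, which is exactly where the efficiency gain over bit-wise sampling comes from.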


17:20–17:40

Adding value to optimisation by interrogating fitness models
Sandy Brownlee (University of Stirling)

Abstract:
Model-based evolutionary algorithms (MBEAs) have proven to
be successful for a wide range of challenging combinatorial
optimisation problems. During the search process of an MBEA,
a model of existing (usually high-fitness) solutions is
constructed. This is used to guide the search, typically by
sampling the model to generate new solutions that are
(hopefully) high in fitness. The model can take many
different forms, but essentially the aim is that it captures
the characteristics that make solutions "good" for a given
problem. In this respect, the model is an explicit
representation of part of the fitness function. This can be
exploited to add value to the optimisation process:
interrogating the model can provide the decision maker with
helpful additional information beyond the optimal solutions.
As the creation of this model is already part of the search
process, such information is obtained almost "for free", and
can include things such as the sensitivity of the fitness
function to the problem variables and linkage structures. It
may even point towards the global optima for the problem
before they are explicitly found by the search algorithm.
Previous publications described the Markov network fitness
model (MFM), within the estimation of distribution
algorithm DEUM. The MFM is closely related to Walsh
decompositions of fitness functions, and as such can offer
human-interpretable information about the problem. This talk
recaps the MFM and how the model parameters can point
towards optimal solutions. Some previously published results
are summarised, before introducing some new results for
well-known fitness functions. The relationship between the
model parameters and structure is discussed, showing how the
MFM parameters reveal the sensitivities of variables and the
strength of linkage. The talk will conclude with some
thoughts on how this may be of benefit to decision
makers.
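The connection to Walsh decompositions can be sketched with a brute-force transform on a small function. This is a generic exhaustive Walsh decomposition, not the MFM/DEUM machinery itself, and the function name is illustrative:

```python
from itertools import product

def walsh_coefficients(f, n):
    """Exhaustive Walsh decomposition of a fitness function on
    n bits. Returns a dict mapping each variable subset (a tuple
    of indices) to its coefficient: large singleton coefficients
    signal sensitive variables, large pair (and higher-order)
    coefficients signal linkage."""
    coeffs = {}
    points = list(product((0, 1), repeat=n))
    for mask in product((0, 1), repeat=n):
        subset = tuple(i for i, m in enumerate(mask) if m)
        total = 0.0
        for x in points:
            # parity of the bits in this subset determines the sign
            sign = (-1) ** sum(x[i] for i in subset)
            total += f(x) * sign
        coeffs[subset] = total / len(points)
    return coeffs
```

For OneMax (the sum of the bits), only the empty set and the singletons get non-zero coefficients, so the decomposition reports equal sensitivity for every variable and no linkage; non-separable functions show up through non-zero higher-order coefficients. The transform is exponential in n, which is why practical models like the MFM estimate a truncated set of terms instead.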


17:40–18:00

BODI: Black-box Optimization by Deterministic Identification
Manuel Valenzuela-Rendón (Tecnológico de Monterrey, Campus Monterrey)

Abstract:
This talk presents a new approach to the binary, black-box
optimization problem, which has traditionally been attacked
using stochastic methods. BODI (Black-box Optimization by
Deterministic Identification) is a family of deterministic
algorithms that identify the objective function, building an
exact model of it, and then obtain the optima by examining
this model without any further function evaluations. BODI
algorithms are presented that optimize functions that can be
modeled as a sum of subfunctions of bounded order,
which may be non-overlapping, overlapping in a structured
manner, or randomly overlapping. BODI algorithms do not use
a population of potential solutions, and do not rely on
statistical estimation of hyperplane evaluations or of
linkage among variables. Nevertheless, BODI algorithms are
competitive with other competent algorithms such as the fast
messy genetic algorithm (fmGA) and the hierarchical
Bayesian optimization algorithm (hBOA). Since BODI
algorithms are deterministic, they find the optimum of the
objective function in every single run. BODI algorithms can
also be modified to find all the optima in a single run. The
mathematical foundations of the BODI approach will be
explained, experimental results for different types of
problems will be shown, and avenues for future research
will be discussed.
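The non-overlapping case can be sketched with a toy that follows the spirit of deterministic identification: each bounded-order block is probed exhaustively (all other bits held fixed), which for a separable function identifies each subfunction up to a constant, after which every block is optimized independently with no further evaluations. This assumes the block structure is known and is not the BODI algorithm itself:

```python
from itertools import product

def optimize_separable(f, blocks):
    """Deterministically optimize a function assumed to be a sum
    of subfunctions over the given disjoint index blocks.
    Each block is enumerated while the remaining bits stay at 0;
    for a separable f that isolates each subfunction up to an
    additive constant, which suffices to pick the best setting."""
    n = max(i for blk in blocks for i in blk) + 1
    best = [0] * n
    for blk in blocks:
        best_val, best_setting = None, None
        for setting in product((0, 1), repeat=len(blk)):
            x = [0] * n
            for i, v in zip(blk, setting):
                x[i] = v
            val = f(tuple(x))  # one deterministic probe per setting
            if best_val is None or val > best_val:
                best_val, best_setting = val, setting
        for i, v in zip(blk, best_setting):
            best[i] = v
    return best
```

On a concatenation of order-3 deceptive trap subfunctions this finds the all-ones optimum in 2 * 2^3 evaluations, every run, illustrating why a deterministic identification scheme needs no population or statistical estimation.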


18:00

Closing
Peter A.N. Bosman and John McCall
