Program



The workshop will be held on Wednesday, July 20, 2016, in room Chasm Creek B. The following talks will be given. Where available, the slides of each presentation can be downloaded by clicking the title of the talk.

16:10-16:20 Welcome and opening
Peter A.N. Bosman and John McCall

16:20-16:40 The Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA): what and why?
Dirk Thierens (Utrecht University)
      Abstract: TBA.

16:40-17:00 The Long (and Fun) Road that Led to DSMGA-II
Tian-Li Yu (National Taiwan University)
      Abstract: Most early efforts concerning model-based evolutionary algorithms focused on probabilistic modeling, while my research centered more on deterministic modeling, such as matrices constructed via pairwise linkage detection. Compared with multivariate detection, pairwise detection runs less risk of overfitting the current population. I will talk about the investigation of several linkage detection metrics and show that, for separable problems, optimal thresholds render most metrics indistinguishable.

For problems with overlapping building-block structures, I will discuss the ideas of minCut and minCut+, which make the most of the building-block structures. However, finding the threshold remains a crucial issue. The linkage tree genetic algorithm (LTGA), proposed in 2010, drops the need for a threshold (via the linkage tree) and makes its decision making noiseless (via optimal mixing), which significantly reduces the population-size requirement. Inspired by LTGA, the dependency structure matrix genetic algorithm II (DSMGA-II), proposed in 2015, consists of four primary components: pairwise linkage detection, the incremental linkage set, restricted mixing, and back mixing. In this talk, I will discuss the ideas behind these techniques, along with the previous work that led to them.
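As a small illustration of the linkage-tree idea behind LTGA, the sketch below (my own construction for this page, not the actual LTGA or DSMGA-II code; the function names and the choice of mutual information as the pairwise metric are assumptions) builds a linkage tree by agglomerative clustering of empirical pairwise mutual information over a population of bit strings:

```python
import itertools
from collections import Counter
from math import log

def pairwise_mi(pop, i, j):
    # Empirical mutual information between bit positions i and j.
    n = len(pop)
    ci = Counter(s[i] for s in pop)
    cj = Counter(s[j] for s in pop)
    cij = Counter((s[i], s[j]) for s in pop)
    mi = 0.0
    for (a, b), c in cij.items():
        p_ab = c / n
        mi += p_ab * log(p_ab / ((ci[a] / n) * (cj[b] / n)))
    return mi

def avg_mi(pop, a, b):
    # Average pairwise MI between two clusters of variable indices.
    return sum(pairwise_mi(pop, i, j) for i in a for j in b) / (len(a) * len(b))

def linkage_tree(pop):
    # Agglomerative clustering: start from singleton clusters and
    # repeatedly merge the pair with the highest average pairwise MI.
    clusters = [frozenset([i]) for i in range(len(pop[0]))]
    tree = list(clusters)
    while len(clusters) > 1:
        a, b = max(itertools.combinations(clusters, 2),
                   key=lambda pair: avg_mi(pop, pair[0], pair[1]))
        clusters.remove(a)
        clusters.remove(b)
        merged = a | b
        clusters.append(merged)
        tree.append(merged)
    return tree

# Bits 0-1 are perfectly linked, as are bits 2-3; the two pairs are independent.
pop = ["0000", "0011", "1100", "1111"]
tree = linkage_tree(pop)
# The tree contains frozenset({0, 1}) and frozenset({2, 3}) as merged clusters,
# so the linked pairs are recovered without any threshold on the MI values.
```

No threshold appears anywhere: the tree records every merge, and it is the mixing phase that later decides which tree nodes are useful.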

17:00-17:20 When the gray box was opened, model-based evolutionary algorithms were already there
Roberto Santana (University of the Basque Country)
      Abstract: The concept of gray box optimization, in juxtaposition to black box optimization, revolves around the idea of exploiting the problem structure to implement more efficient evolutionary algorithms (EAs). A number of approaches in evolutionary computation have previously addressed the question of exploiting problem information to enhance the performance of EAs. In particular, factorization-based distribution algorithms whose factorizations are derived directly from the problem structure have provided important gains in search efficiency. Similarly, methods for approximating problem structure with statistical techniques have consistently been used to improve search efficiency and to unveil previously unknown information about real-world optimization problems. This talk discusses the relationship between gray box optimization, factorization-based EAs, and methods for the automatic discovery of problem structure. We argue that one path to further exploiting a priori known structural information about a given problem is the conception of more efficient sampling methods. Recent work on applying sampling methods that originated in statistical physics to model-based EAs is discussed. Finally, we address the relationship between EAs that exploit information about the structure of a given problem and evolvability, where the latter is understood as the ability of a population to generate heritable phenotypic variation.

17:20-17:40 Adding value to optimisation by interrogating fitness models
Sandy Brownlee (University of Stirling)
      Abstract: Model-based evolutionary algorithms (MBEAs) have proven successful for a wide range of challenging combinatorial optimisation problems. During the search process of an MBEA, a model of existing (usually high-fitness) solutions is constructed. This is used to guide the search, typically by sampling the model to generate new solutions that are (hopefully) high in fitness. The model can take many different forms, but essentially the aim is to capture the characteristics that make solutions "good" for a given problem. In this respect, the model is an explicit representation of part of the fitness function. This can be exploited to add value to the optimisation process: interrogating the model can provide the decision maker with helpful additional information beyond the optimal solutions. As the creation of this model is already part of the search process, such information is obtained almost "for free", and can include things such as the sensitivity of the fitness function to the problem variables and linkage structures. It may even point towards the global optima of the problem before they are explicitly found by the search algorithm.

Previous publications described the Markov network fitness model (MFM), within the Estimation of Distribution Algorithm, DEUM. The MFM is closely related to Walsh decompositions of fitness functions, and as such can offer human-interpretable information about the problem. This talk recaps the MFM, and how the model parameters can point towards optimal solutions. Some previously published results are summarised, before introducing some new results for well-known fitness functions. The relationship between the model parameters and structure is discussed, showing how the MFM parameters reveal sensitivities of variables and the strength of linkage. The talk will conclude with some thoughts as to how this may be of benefit to decision makers.
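To make the Walsh connection concrete, here is a small illustrative sketch (my own, not taken from the talk; note that it enumerates the whole search space exactly, whereas DEUM estimates the coefficients from a sample of solutions): for a separable function such as onemax, the signs of the order-1 Walsh coefficients already point to the optimal setting of each bit.

```python
from itertools import product

def walsh_order1(f, n):
    # Exact order-0 and order-1 Walsh coefficients by full enumeration,
    # feasible only for small n (2**n evaluations).
    total = 2 ** n
    w0 = 0.0
    w = [0.0] * n
    for bits in product((0, 1), repeat=n):
        fx = f(bits)
        w0 += fx
        for i, b in enumerate(bits):
            s = 1 if b == 1 else -1   # spin encoding: 0 -> -1, 1 -> +1
            w[i] += fx * s
    return w0 / total, [wi / total for wi in w]

def onemax(bits):
    return sum(bits)

w0, w = walsh_order1(onemax, 5)
# For onemax each w_i is positive, so each bit "prefers" the value 1:
# reading off the signs yields the global optimum 11111.
optimum = [1 if wi > 0 else 0 for wi in w]
```

The magnitude of each coefficient plays the role of a sensitivity: a larger |w_i| means fitness depends more strongly on that variable, which is the kind of information the talk proposes handing to a decision maker.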

17:40-18:00 BODI: Black-box Optimization by Deterministic Identification
Manuel Valenzuela-Rendón (Tecnológico de Monterrey, Campus Monterrey)
      Abstract: This talk presents a new approach to the binary black-box optimization problem, which has traditionally been attacked with stochastic methods. BODI (Black-box Optimization by Deterministic Identification) is a family of deterministic algorithms that identify the objective function, building an exact model of it, and then obtain the optima by examining this model without any further function evaluations. BODI algorithms are presented that optimize functions which can be modeled as a sum of sub-functions of bounded order, where the sub-functions may be non-overlapping, overlapping in a structured manner, or randomly overlapping. BODI algorithms use neither a population of potential solutions nor statistical estimation of hyperplane evaluations or of linkage among variables. Nevertheless, they are competitive with other competent algorithms such as the fast messy genetic algorithm (fmGA) and the hierarchical Bayesian optimization algorithm (hBOA). Since BODI algorithms are deterministic, they find the optimum of the objective function in every single run. They can also be modified to find all the optima in a single run. The mathematical foundations of the BODI approach will be explained; experimental results for different types of problems will be shown; finally, avenues for future research will be discussed.
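The following toy sketch (my own construction, not BODI itself; in particular it assumes the partition into non-overlapping sub-functions is already known, whereas BODI also identifies that structure from function evaluations) shows how determinism plays out in the simplest case: with the blocks given, each bounded-order sub-function can be optimized exhaustively against a fixed background, with no randomness and no population.

```python
from itertools import product

def identify_optimum(f, blocks, n):
    # Deterministic optimization of f = sum of independent sub-functions,
    # one per index block: 2**k evaluations per block, no randomness.
    # Because the blocks do not interact, the per-block argmax against any
    # fixed background is the global per-block optimum.
    x = [0] * n
    for block in blocks:
        best_val, best_bits = None, None
        for bits in product((0, 1), repeat=len(block)):
            y = list(x)
            for idx, b in zip(block, bits):
                y[idx] = b
            v = f(y)
            if best_val is None or v > best_val:
                best_val, best_bits = v, bits
        for idx, b in zip(block, best_bits):
            x[idx] = b
    return x

def trap3(u):
    # Fully deceptive 3-bit trap: global optimum at u == 3,
    # deceptive gradient pointing toward u == 0.
    return 3 if u == 3 else 2 - u

def f(x):
    return trap3(sum(x[0:3])) + trap3(sum(x[3:6]))

best = identify_optimum(f, [(0, 1, 2), (3, 4, 5)], 6)
# Finds the global optimum [1, 1, 1, 1, 1, 1] using 16 evaluations
# instead of the 64 an exhaustive search would need, on every run.
```

Deception is no obstacle here because each block is enumerated exactly rather than sampled, which mirrors the abstract's claim that BODI finds the optimum in every single run.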

18:00 Closing
Peter A.N. Bosman and John McCall