ThRaSH Seminars - Winter 2021 and Spring 2022

About ThRaSH

Randomized search heuristics such as stochastic gradient methods, simulated annealing, evolutionary algorithms, stochastic neural networks for optimization, ant colony and swarm optimization, and the cross-entropy method are frequently used across many scientific communities. They have been successfully applied in various domains, for both combinatorial and numerical optimization. Despite their success in practice, proving that such algorithms satisfy certain performance guarantees remains a difficult and largely open problem.

The mission of the theory of randomized search heuristics (ThRaSH) seminar series is to contribute to the theoretical understanding of randomized search heuristics, in particular of their computational complexity. The aim is to stimulate interactions within the research field and between people from different disciplines working on randomized algorithms. The primary focus is on discussing recent ideas and detecting challenging topics for future work, rather than on the presentation of final results.

Steering Committee (alphabetical order)

Seminars Winter 2021 and Spring 2022

Organisers:

Zoom link for all talks:

Due to holidays and deadlines, this round of seminars will be divided into two sessions: Winter 2021 (December 2021) and Spring 2022 (March-May 2022).

Crossover-Aided Biased Walking on Plateaus

Maxim Buzdalov May 13th 2022, (9:00-10:00 UTC)

Many discrete optimization problems feature plateaus, which are hard for evolutionary algorithms due to the lack of fitness guidance. While higher mutation rates may assist in jumping from the plateau to some better search point, an algorithm is otherwise doomed to perform random walks on a plateau, possibly with some assistance from diversity mechanisms.

In this talk, I present a new mechanism that helps in escaping the plateau by introducing additional drift towards promising points. It was discovered while analysing why the (1+(λ,λ)) GA with standard parameters solves certain instances of the vertex cover problem exponentially faster than the (1+1) EA. An intricate interplay between the problem structure and the way crossovers are used results in a drift towards the points where finding the next improvement is much easier. Despite its apparent subtlety, this mechanism seems to be active on a large fraction of randomly generated vertex cover problem instances, which I find surprising and inspiring.
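
For readers who have not seen it, the following is a minimal Python sketch of the (1+(λ,λ)) GA with the standard parameter setting (mutation rate p = λ/n, crossover bias c = 1/λ), written for a generic pseudo-Boolean fitness function f. It illustrates the algorithm only; it does not reproduce the vertex cover encoding or the drift mechanism from the talk, and all names and the stopping criterion are mine.

    import random

    def opll_ga(f, n, lam, max_evals=100_000):
        """Sketch of the (1+(lambda,lambda)) GA with standard parameters:
        mutation rate p = lam/n, crossover bias c = 1/lam."""
        x = [random.randint(0, 1) for _ in range(n)]
        fx = f(x)
        evals = 1
        while evals < max_evals:
            # Mutation phase: sample a common radius ell ~ Bin(n, lam/n),
            # then create lam mutants, each flipping exactly ell bits of x.
            ell = sum(random.random() < lam / n for _ in range(n))
            mutants = []
            for _ in range(lam):
                y = x[:]
                for i in random.sample(range(n), ell):
                    y[i] = 1 - y[i]
                mutants.append((f(y), y))
            evals += lam
            fxp, xp = max(mutants, key=lambda t: t[0])
            # Crossover phase: lam biased crossovers between x and the best
            # mutant xp; each bit is taken from xp with probability 1/lam.
            offspring = []
            for _ in range(lam):
                z = [xp[i] if random.random() < 1 / lam else x[i] for i in range(n)]
                offspring.append((f(z), z))
            evals += lam
            fz, z = max(offspring, key=lambda t: t[0])
            # Elitist replacement: keep the better of parent and best offspring.
            if fz >= fx:
                x, fx = z, fz
        return x, fx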

There will be no seminar on May 6th 2022 due to the Dagstuhl seminar.

Running Time Analysis of the Non-dominated Sorting Genetic Algorithm II (NSGA-II) using Binary or Stochastic Tournament Selection

Chao Bian (Nanjing University) April 29th 2022, (9:00-10:00 UTC)

Evolutionary algorithms (EAs) have been widely used to solve multi-objective optimization problems and have become the most popular tool for this task. However, the theoretical foundation of multi-objective EAs (MOEAs), especially the essential theoretical aspect, i.e., running time analysis, is still largely underdeveloped. The few existing theoretical works mainly considered simple MOEAs, while the non-dominated sorting genetic algorithm II (NSGA-II), probably the most influential MOEA, has not been analyzed except for a very recent work considering a simplified variant without crossover. In this paper, we present a running time analysis of the standard NSGA-II for solving LOTZ, OneMinMax and COCZ, three commonly used bi-objective optimization problems. Specifically, we prove that the expected running time (i.e., number of fitness evaluations) is \(O(n^3)\) for LOTZ, and \(O(n^2\log n)\) for OneMinMax and COCZ, which is, surprisingly, the same as that of the previously analyzed simple MOEAs, GSEMO and SEMO. Next, we introduce a new parent selection strategy, stochastic tournament selection (i.e., \(k\)-tournament selection where \(k\) is uniformly sampled at random), to replace the binary tournament selection strategy of NSGA-II, decreasing the required expected running time to \(O(n^2)\) for all three problems. Experiments are also conducted, suggesting that the derived running time upper bounds are tight for LOTZ, and almost tight for OneMinMax and COCZ.
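
As a rough illustration of the modified parent selection, the two strategies can be contrasted in Python as below. This is a sketch under my own naming, not code from the paper: a generic fitness key stands in for the crowded-comparison procedure that NSGA-II actually uses, and the sampling range for \(k\) is an assumption, since the abstract only states that \(k\) is sampled uniformly at random.

    import random

    def binary_tournament(population, fitness):
        # Standard binary tournament: pick two members uniformly at
        # random (with replacement) and return the better one.
        a, b = random.choice(population), random.choice(population)
        return a if fitness(a) >= fitness(b) else b

    def stochastic_tournament(population, fitness):
        # Stochastic tournament: the tournament size k is itself drawn
        # uniformly at random (here from {1, ..., len(population)}, an
        # assumed range) before the usual k-tournament is run.
        k = random.randint(1, len(population))
        contestants = [random.choice(population) for _ in range(k)]
        return max(contestants, key=fitness)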

Benchmarking Evolutionary Computation Methods - What’s in for Theoreticians?

Carola Doerr (Sorbonne Université) April 22nd 2022, (9:00-10:00 UTC)

Theory of evolutionary computation (EC) is sometimes criticized for being too disconnected from the applications on which EC approaches shine. In my opinion, there are two sides to this statement: On the one hand, I do NOT think that the gap between real-world applications of EC methods and the "sterile" environments for which we can prove runtime guarantees, black-box complexity statements, etc. should worry us. However, I also believe that we can do better in communicating our insights, and in making our work more accessible (and hence more valuable) for practitioners. In this presentation, I will argue that benchmarking is a powerful tool in this regard. I will also show that benchmarking offers much more, e.g., by providing valuable inspiration for theoretical work. Participants interested in this discussion are cordially invited to the Benchmarking@PPSN2022 workshop, whose focus topic is the role of benchmarking as a mediator between theory and practice: https://sites.google.com/view/benchmarking-network/home/activities/ppsn-2022-workshop

There will be no seminar on April 15th 2022 due to the public holiday in several countries.

Success-Based Population Sizes for Non-Elitist Evolutionary Algorithms

Mario A. Hevia Fajardo (University of Sheffield) April 8th 2022, (9:00-10:00 UTC)

Success-based parameter control mechanisms are non-static parameter choices for evolutionary algorithms (EAs) that change the parameter values from one generation to the next based on the success (or lack of it) of the previous generation. Despite their simplicity, these mechanisms have been shown to perform as well as or better than static parameter choices. However, most runtime analyses focus on elitist EAs, and we are only starting to understand their behaviour when applied to non-elitist EAs.

In this talk, I will report on recent research on the runtime analysis of success-based mechanisms for non-elitist EAs. More precisely, I will present the results of recent work on the self-adjusting (1, λ) EA, which was shown to optimise OneMax and Cliff in \(O(n \log n)\). However, this only holds if the success rate \(s\) that governs self-adjustment is small enough. Otherwise, the self-adjusting (1, λ) EA stagnates on an easy slope, where frequent successes drive down the offspring population size. On the other hand, in the absence of easy slopes, the algorithm is able to optimise any function in at most the same asymptotic time as its elitist counterpart, independently of the choice of \(s\). I will also explain the useful framework used to prove all of the results above.

(Joint work with Dirk Sudholt)
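
As background, the following minimal Python sketch shows a success-based update rule of the kind analysed in this line of work. The update factor F, the rounding of λ, and the tie handling are assumptions of mine; the algorithm in the paper may differ in such details.

    import random

    def self_adjusting_one_comma_lambda_ea(f, n, s=1.0, F=1.5, max_gens=10_000):
        """Sketch of a self-adjusting (1,lambda) EA: on a successful
        generation lambda shrinks by a factor F, otherwise it grows by
        F**(1/s), so that in equilibrium roughly one generation in s+1
        is successful."""
        x = [random.randint(0, 1) for _ in range(n)]
        fx = f(x)
        lam = 1.0
        for _ in range(max_gens):
            # Create round(lam) offspring by standard bit mutation (rate 1/n).
            offspring = []
            for _ in range(max(1, round(lam))):
                y = [b if random.random() >= 1 / n else 1 - b for b in x]
                offspring.append((f(y), y))
            fy, y = max(offspring, key=lambda t: t[0])
            # Non-elitist (comma) replacement: the best offspring always
            # replaces the parent, even if it is worse.
            success = fy > fx
            x, fx = y, fy
            # Success-based update of the offspring population size.
            lam = max(1.0, lam / F) if success else lam * F ** (1 / s)
        return x, fx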

First Runtime Analyses of Competitive Population-based Co-Evolutionary Algorithms

Per Kristian Lehre (University of Birmingham) April 1st 2022, (9:00-10:00 UTC)

Co-evolutionary algorithms have a wide range of applications, such as in hardware design, evolution of strategies for board games, and patching software bugs. However, these algorithms are poorly understood and applications are often limited by pathological behaviour, such as loss of gradient, relative over-generalisation, and mediocre objective stasis. It is an open challenge to develop a theory that can predict when co-evolutionary algorithms find solutions efficiently and reliably.

This talk presents what is, to our knowledge, the first runtime analysis of population-based competitive co-evolutionary algorithms. We provide a mathematical framework for describing and reasoning about the performance of co-evolutionary processes. An example application of the framework shows a scenario where a simple co-evolutionary algorithm obtains a solution in polynomial expected time. We also describe settings where co-evolutionary algorithms need exponential time with overwhelmingly high probability.

Notice: From March 27th onwards, we will switch the time slot to 9:00-10:00 UTC = 11:00-12:00 Central European Summer Time = 17:00-18:00 Beijing time due to daylight saving time.

How do Heuristic Search Algorithms Filter Noise?

Timo Kötzing (Hasso Plattner Institute) March 25th 2022, (10:00-11:00 UTC)

I consider a simple noisy optimization problem: the value of a bit string is its number of 1s plus a random variable drawn from a (centered) Gaussian distribution. It was no surprise to me that simple hill-climbers (like random local search or the (1+1) Evolutionary Algorithm) cannot handle even small noise levels. It was a big surprise to me that some other algorithms (like some estimation-of-distribution algorithms and crossover-based algorithms) can deal effectively even with large noise. I came to understand this latter phenomenon as essentially the result of averaging over many iterations, although some parameters have to be scaled up in order to deal with higher noise.

However, even considering these effects, some algorithms were unreasonably effective. In my talk I want to discuss some further findings that seem to indicate that, for example, the compact genetic algorithm can handle Gaussian noise with O(n) variance (which is a lot!) without any scaling of parameters or any other tricks – just out of the box. This is thanks to sampling widely different individuals a lot of the time, which I will elaborate on in my talk.
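
A minimal Python sketch of the setting discussed in the talk, assuming the usual textbook formulation of the compact GA with frequency borders 1/n and 1-1/n; the parameter names and stopping criterion are mine.

    import random

    def noisy_onemax(x, sigma):
        # Fitness is the number of 1-bits plus centered Gaussian noise,
        # re-drawn independently at every evaluation.
        return sum(x) + random.gauss(0.0, sigma)

    def cga(n, sigma, K, max_iters=200_000):
        """Sketch of the compact GA: a frequency vector p is nudged by
        1/K towards the (noisily) better of two sampled individuals."""
        p = [0.5] * n
        for _ in range(max_iters):
            x = [int(random.random() < pi) for pi in p]
            y = [int(random.random() < pi) for pi in p]
            if noisy_onemax(x, sigma) < noisy_onemax(y, sigma):
                x, y = y, x  # ensure x is the noisily better sample
            for i in range(n):
                if x[i] != y[i]:
                    step = 1.0 / K if x[i] == 1 else -1.0 / K
                    # Keep frequencies inside the usual borders [1/n, 1-1/n].
                    p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
            if all(pi >= 1 - 1 / n for pi in p):
                break
        return p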

First Runtime Analyses for the NSGA-II

Benjamin Doerr (École Polytechnique) March 18th 2022, (10:00-11:00 UTC)

In this talk, I will report on recent research on the runtime of the NSGA-II multi-objective evolutionary algorithm, more precisely, the results just presented at AAAI and those obtained since then. The abstract of the former is as follows. This is joint work with Weijie Zheng and Yufei Liu.

The non-dominated sorting genetic algorithm II (NSGA-II) is the most intensively used multi-objective evolutionary algorithm (MOEA) in real-world applications. However, in contrast to several simple MOEAs analyzed also via mathematical means, no such study exists for the NSGA-II so far. In this work, we show that mathematical runtime analyses are feasible also for the NSGA-II. As particular results, we prove that with a population size larger than the Pareto front size by a constant factor, the NSGA-II with two classic mutation operators and three different ways to select the parents satisfies the same asymptotic runtime guarantees as the SEMO and GSEMO algorithms on the basic OneMinMax and LeadingOnesTrailingZeros benchmark functions. However, if the population size is only equal to the size of the Pareto front, then the NSGA-II cannot efficiently compute the full Pareto front (for an exponential number of iterations, the population will always miss a constant fraction of the Pareto front). Our experiments confirm the above findings.
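
For reference, the two benchmark functions mentioned above are bi-objective; the following Python definitions follow their usual textbook formulations (both objectives are to be maximised):

    def one_min_max(x):
        # Maximise the number of 0-bits and the number of 1-bits
        # simultaneously; every bit string is Pareto-optimal.
        n = len(x)
        ones = sum(x)
        return (n - ones, ones)

    def lotz(x):
        # LeadingOnesTrailingZeros: maximise the number of leading
        # 1-bits and the number of trailing 0-bits simultaneously.
        n = len(x)
        leading_ones = next((i for i, b in enumerate(x) if b == 0), n)
        trailing_zeros = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
        return (leading_ones, trailing_zeros)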

Simulated Annealing and Metropolis Algorithm Revisited in Pseudo-Boolean Optimization

Carsten Witt (Technical University of Denmark) December 17th 2021, (10:00-11:00 UTC)

The Metropolis Algorithm (MA) and its generalization Simulated Annealing (SA) are among the oldest randomized search heuristics known. Runtime analyses of MA and SA date back to the 1980s and 1990s; however, there are only a few recent results on their runtime behavior. We revisit classical runtime analyses of MA and SA by Wegener (2005) and Jansen and Wegener (2007) and prove partially stronger results using state-of-the-art methods of analysis. Moreover, we investigate MA on a generalized cliff function, where both the location of the cliff point and the fitness decline after the cliff can be adjusted, and prove different runtime regimes.

(Joint work with Benjamin Doerr, Taha El Ghazi and Amirhossein Rajabi)
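
As background, here is a minimal Python sketch of the Metropolis Algorithm for maximising a pseudo-Boolean function at a fixed temperature T; Simulated Annealing is obtained by lowering T over time according to a cooling schedule. Parameter names and the stopping criterion are mine.

    import math
    import random

    def metropolis(f, n, T, max_iters=100_000):
        """Sketch of the Metropolis Algorithm for maximising a
        pseudo-Boolean function f at fixed temperature T."""
        x = [random.randint(0, 1) for _ in range(n)]
        fx = f(x)
        for _ in range(max_iters):
            # Propose a neighbour by flipping one uniformly chosen bit.
            i = random.randrange(n)
            y = x[:]
            y[i] = 1 - y[i]
            fy = f(y)
            # Always accept improvements; accept a worsening of delta
            # with probability exp(-delta / T).
            delta = fx - fy
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x, fx = y, fy
        return x, fx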

On the runtime of non-elitist evolutionary algorithms optimizing fitness functions with plateaus

Anton Eremeev (Sobolev Institute of Mathematics) December 10th 2021, (10:00-11:00 UTC)

We consider the expected runtime of evolutionary algorithms (EAs) without elite individuals, based on bitwise mutation, when they are applied to optimize fitness functions with plateaus of constant fitness. To this end, we consider two types of fitness functions: the Plateau_k function, with a plateau of second-best fitness in a ball of radius k around the unique optimum, and the Royal Road function. For the Plateau_k function, we obtain polynomial upper bounds on the expected runtime for some modes of non-elitist EAs based on unbiased mutation, in particular bitwise mutation. This is shown using level-based theorems. We also note that the EA with fitness-proportionate selection is inefficient for this function if bitwise mutation is used with the standard mutation probability. For the Royal Road function, we obtain lower and upper bounds on the expected runtime, which generalize the previously known bounds for the OneMax function.
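
To make the two benchmark classes concrete, the following Python sketches follow common formulations from the literature; the exact offsets and constants used in the paper may differ, so treat these as illustrations only.

    def plateau_k(x, k):
        # Plateau_k-style function: OneMax behaviour outside a Hamming
        # ball of radius k around the all-ones optimum, constant
        # second-best fitness inside the ball, unique optimum on top.
        n = len(x)
        ones = sum(x)
        if ones == n:
            return n + 1          # unique global optimum
        if ones >= n - k:
            return n - k          # plateau of second-best fitness
        return ones               # OneMax-style guidance towards the ball

    def royal_road(x, r):
        # Royal Road: fitness counts only fully filled blocks of length r
        # (assumes r divides the string length).
        n = len(x)
        return sum(r for i in range(0, n, r) if all(x[i:i + r]))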

A simple proof of the general \(\Omega(n \log n)\) lower bound for mutation-based evolutionary algorithms

Jonathan Rowe (University of Birmingham) December 3rd 2021, (10:00-11:00 UTC)

A recent paper by Lehre and Sudholt proved general lower bounds for mutation-based algorithms which use unbiased unary mutation operators. In the special case where mutation is performed bitwise according to some mutation rate, we show that a similar result can be proved rather easily. The emphasis of the Lehre and Sudholt paper is on the effect of the population size, whereas our result shows the effect of the mutation rate. In particular, small mutation rates can lead to stronger lower bounds.
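
To give a flavour of why such bounds hold (this is the standard coupon-collector intuition, stated here as background rather than as the exact argument of the talk): under bitwise mutation with rate \(p\), a fixed bit is never flipped during \(t\) iterations with probability \((1-p)^t \approx e^{-pt}\). From a typical starting point, \(\Theta(n)\) bits must each flip at least once, so the expected number of untouched bits after \(t\) iterations is \(\Theta(n)e^{-pt}\), which stays above a constant until \(t = \Omega(\ln n / p)\). For the standard rate \(p = c/n\) this gives the \(\Omega(n \log n)\) bound, and smaller rates \(p\) push the bound higher, consistent with the last sentence above.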