ThRaSH Seminar Series
About ThRaSH
Randomized search heuristics such as stochastic gradient methods, simulated annealing, evolutionary algorithms, stochastic neural networks for optimization, ant colony and swarm optimization, and the cross-entropy method are frequently used across many scientific communities. They have been successfully applied in various domains, for both combinatorial and numerical optimization. Despite their success in practice, proving that such algorithms satisfy certain performance guarantees is still a difficult and widely open problem.
The mission of the theory of randomized search heuristics (ThRaSH) seminar series is to contribute to the theoretical understanding of randomized search heuristics, in particular of their computational complexity. The aim is to stimulate interactions within the research field and between people from different disciplines working on randomized algorithms. The primary focus is on discussing recent ideas and detecting challenging topics for future work, rather than on the presentation of final results.
Steering Committee (alphabetical order)
- Benjamin Doerr (École Polytechnique, France)
- Thomas Jansen (Aberystwyth University, UK)
- Timo Kötzing (Hasso Plattner Institute Potsdam, Germany)
- Per Kristian Lehre (University of Birmingham, UK)
- Pietro S. Oliveto (Southern University of Science and Technology, China)
- Carsten Witt (Technical University of Denmark)
Talks in Spring 2024
When? On Mondays at 14:00 CEST (12:00 UTC), from May 13 to June 24 (no talk on May 20)
Where? Here is a permanent Zoom link.
Mailing list? Subscribe to the seminar mailing list here. If you want to receive all ThRaSH-related news, subscribe to this list as well.
Organizers: Benjamin Doerr and Carsten Witt.
Schedule
- 13 May: Dirk Sudholt, Evolutionary Computation Meets Graph Drawing: Runtime Analysis for Crossing Minimisation on Layered Graph Drawings
- 27 May: Johannes Lengler, Population Diversity in Fitness-Neutral Landscapes
- 3 June: CANCELLED
- 10 June: Denis Antipov, Reconstructing a true individual from a noisy one: a new perspective gives new insights
- 17 June: Martin Krejca, A Flexible Evolutionary Algorithm With Dynamic Mutation Rate Archive
- 24 June: Chao Qian, Quality-Diversity: Advances in Theories and Algorithms
Abstracts of the talks
- 13 May 2024
Evolutionary Computation Meets Graph Drawing: Runtime Analysis for Crossing Minimisation on Layered Graph Drawings — Dirk Sudholt, University of Passau, Germany
Graph Drawing aims to make graphs visually comprehensible while faithfully representing their structure. In layered drawings, each vertex is drawn on a horizontal line and edges are drawn as y-monotone curves. A key ingredient for constructing such drawings is the One-Sided Bipartite Crossing Minimization (OBCM) problem: given two layers of a bipartite graph and a fixed horizontal order of vertices on the first layer, the task is to order the vertices on the second layer to minimise the number of edge crossings. We analyse the performance of simple evolutionary algorithms for OBCM and compare different operators for permutations: exchanging two elements, swapping adjacent elements, and jumping an element to a new position. We show that the simplest and cheapest mutation operator, swap, shows excellent performance on instances that can be drawn crossing-free, which correspond to a generalised sorting problem. We give a tight runtime bound of Θ(n²) via a parallel BubbleSort algorithm and a delay sequence argument. This gives a positive answer to an open problem from Scharnow, Tinnefeld, and Wegener (2004) on whether the best known bound of O(n² log n) for sorting in permutation spaces can be improved to Θ(n²), albeit for an even simpler operator.
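For readers who want to experiment, here is a minimal Python sketch of the OBCM setting, assuming a simple elitist (1+1) EA with the adjacent-swap operator; the function names and parameter choices are illustrative, not the exact algorithms analyzed in the talk:

```python
import random

def crossings(edges, pos1, pos2):
    """Count pairwise edge crossings in a two-layer drawing.
    edges: list of (u, v) pairs with u on layer 1 and v on layer 2;
    pos1/pos2 map vertices to horizontal positions on their layer."""
    total = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (u1, v1), (u2, v2) = edges[i], edges[j]
            # Two edges cross iff their endpoints appear in opposite
            # relative order on the two layers.
            if (pos1[u1] - pos1[u2]) * (pos2[v1] - pos2[v2]) < 0:
                total += 1
    return total

def adjacent_swap(perm):
    """The cheap 'swap' mutation: exchange two adjacent elements."""
    child = perm[:]
    i = random.randrange(len(child) - 1)
    child[i], child[i + 1] = child[i + 1], child[i]
    return child

def one_plus_one_ea(edges, layer1, layer2, steps=100_000):
    """(1+1) EA minimising crossings over orderings of layer2,
    with the order of layer1 fixed (the OBCM setting)."""
    pos1 = {u: i for i, u in enumerate(layer1)}
    perm = list(layer2)
    random.shuffle(perm)
    best = crossings(edges, pos1, {v: i for i, v in enumerate(perm)})
    for _ in range(steps):
        child = adjacent_swap(perm)
        f = crossings(edges, pos1, {v: i for i, v in enumerate(child)})
        if f <= best:  # elitist acceptance
            perm, best = child, f
    return perm, best
```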
- 27 May 2024
Population Diversity in Fitness-Neutral Landscapes — Johannes Lengler, ETH Zurich, Switzerland
One of the major advantages of population-based optimization heuristics like genetic algorithms is that we can recombine two or more solutions into a new one by crossover. In practice, crossover is known to be extremely helpful, and we would also like to understand how much it helps on theoretical benchmarks. However, there is one obstacle: the effectiveness of crossover depends on the population diversity (e.g., measured by the average pairwise Hamming distance of the solutions), so we need to understand how the diversity of a population evolves over time.
We answer this question under a seemingly very strong assumption: for a flat objective function, i.e., in the absence of fitness signals. We show that in this case, surprisingly, the details of the algorithm have almost no influence on the diversity. Specifically, for the (μ+1) Genetic Algorithm we show that the diversity approaches an equilibrium which (almost) does not depend on the mutation or crossover operators used. The equilibrium point increases linearly with the population size.
Although flat objective functions are seemingly uninteresting, the result turned out to be surprisingly useful. I will give one application: for Jump, a standard benchmark for local optima, the runtime of the (μ+1) GA with a small modification is reduced massively by crossover, because the natural diversity in this range is large enough to speed up optimization.
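As a small illustration of the quantities involved, the following Python sketch computes the average pairwise Hamming distance of a bit-string population and simulates one step of a (μ+1) GA on a flat landscape, where the removed individual is chosen uniformly at random; the crossover probability and operators here are assumptions for illustration only:

```python
import random

def avg_hamming(pop):
    """Average pairwise Hamming distance: the diversity
    measure discussed in the talk."""
    mu = len(pop)
    total = sum(
        sum(a != b for a, b in zip(pop[i], pop[j]))
        for i in range(mu)
        for j in range(i + 1, mu)
    )
    return 2 * total / (mu * (mu - 1))

def flat_mu_plus_one_step(pop, pc=0.5):
    """One step of a (mu+1) GA on a flat objective function:
    all fitness values are equal, so removal is uniform."""
    n = len(pop[0])
    if random.random() < pc:
        p1, p2 = random.sample(pop, 2)
        child = [random.choice(ab) for ab in zip(p1, p2)]  # uniform crossover
    else:
        child = random.choice(pop)[:]
    child = [b ^ (random.random() < 1 / n) for b in child]  # bit mutation, rate 1/n
    pop.append(child)
    del pop[random.randrange(len(pop))]  # flat fitness: drop a uniform individual
    return pop
```

Iterating `flat_mu_plus_one_step` and tracking `avg_hamming` lets one observe the equilibrium behavior described in the abstract.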
- 10 June 2024
Reconstructing a true individual from a noisy one: a new perspective gives new insights — Denis Antipov, The University of Adelaide, Australia
EAs are often used to solve problems in uncertain environments, e.g., for the optimization of dynamic functions or functions with stochastic noise. They have been shown to be effective in such circumstances; however, the theoretical understanding of why they are robust is very limited. In this talk, we focus on prior noise, that is, the setting where individuals might be randomly changed before their fitness is evaluated. Previous works considering this setting showed that the (1 + 1) EA can withstand only low rates of noise, while population-based EAs seem to be quite robust even to strong noise. However, the analysis of EAs under noise is often tailored to the problems considered, which is caused by its complexity: even elitist algorithms, which are usually described by simple stochastic processes, might adopt much more complex non-elitist behavior.
In this talk, we describe a new method of analysis for prior noise problems. It is based on the observation that when we know both the parent and its offspring affected by the noise, we can estimate the distribution of the true noiseless offspring as if it were created via crossover between the parent and the noisy offspring. With this perspective, we show that the (1 + λ) EA and the (1, λ) EA with only a logarithmic population size are very robust even to constant-rate noise (noise that affects a constant fraction of all fitness evaluations): they can solve noisy OneMax in O(n log n) fitness evaluations, the same as in the noiseless case.
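Here is a minimal Python sketch of the prior-noise setting on OneMax with a (1 + λ) EA, assuming a variant that re-evaluates the parent in every generation; the exact variant analyzed in the talk may handle evaluations differently, and all parameter choices below are illustrative:

```python
import math
import random

def noisy_onemax(x, q):
    """OneMax with prior noise: with probability q, a uniformly
    random bit is flipped before evaluation (the stored
    individual itself is left unchanged)."""
    y = list(x)
    if random.random() < q:
        y[random.randrange(len(y))] ^= 1
    return sum(y)

def one_plus_lambda_ea(n, q, max_evals=10**7):
    """(1 + lambda) EA with logarithmic offspring population
    on noisy OneMax; stops at the true optimum or the budget."""
    lam = max(1, math.ceil(math.log(n)))
    x = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(x) < n and evals < max_evals:
        offspring = []
        for _ in range(lam):
            y = [b ^ (random.random() < 1 / n) for b in x]  # rate 1/n
            offspring.append((noisy_onemax(y, q), y))
        evals += lam + 1
        best_f, best_y = max(offspring, key=lambda t: t[0])
        if best_f >= noisy_onemax(x, q):  # noisy parent re-evaluation
            x = best_y
    return evals
```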
- 17 June 2024
A Flexible Evolutionary Algorithm With Dynamic Mutation Rate Archive — Martin Krejca, École Polytechnique, France
We propose a new, flexible approach for dynamically maintaining successful mutation rates in evolutionary algorithms using k-bit flip mutations. The algorithm adds successful mutation rates to an archive of promising rates that are favored in subsequent steps. Rates expire when their number of unsuccessful trials exceeds a threshold, while rates currently not present in the archive can enter it in two ways: (i) via user-defined minimum selection probabilities for rates combined with a successful step, or (ii) via a stagnation detection mechanism that increases the value for a promising rate after the current bit-flip neighborhood has been explored with high probability. For the minimum selection probabilities, we suggest different options, including heavy-tailed distributions. We conduct a rigorous runtime analysis of the flexible evolutionary algorithm on the OneMax and Jump functions, on general unimodal functions, on minimum spanning trees, and on a class of hurdle-like functions with varying hurdle width that benefit particularly from the archive of promising mutation rates. In all cases, the runtime bounds are close to or even outperform the best known results for both stagnation detection and heavy-tailed mutations.
(joint work with Carsten Witt)
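The following simplified Python sketch conveys the core archive idea under illustrative assumptions (expiry threshold, minimum selection probability, uniform choice from the archive); the stagnation-detection entry mechanism (ii) is omitted, so this is not the algorithm from the paper:

```python
import random

def archive_ea(f, n, iters=100_000, expiry=2_000, p_min=0.05):
    """EA maintaining a dynamic archive of promising k-bit-flip
    strengths. The parameters (expiry, p_min) are illustrative,
    not the paper's choices."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    archive = {1: 0}  # strength k -> number of unsuccessful trials
    for _ in range(iters):
        if random.random() < p_min:
            k = random.randint(1, n)  # minimum selection probability:
        else:                         # any strength may (re-)enter
            k = random.choice(list(archive))
        y = list(x)
        for i in random.sample(range(n), k):
            y[i] ^= 1                 # flip exactly k bits
        fy = f(y)
        if fy > fx:                   # success: accept, refresh the rate
            x, fx = y, fy
            archive[k] = 0
        elif k in archive:
            archive[k] += 1
            if archive[k] > expiry and len(archive) > 1:
                del archive[k]        # rate expires after too many failures
    return x, fx
```

For instance, `archive_ea(sum, 100)` runs this sketch on OneMax, with `sum` serving as the fitness function.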
- 24 June 2024
Quality-Diversity: Advances in Theories and Algorithms — Chao Qian, Nanjing University, China
Quality-Diversity (QD) algorithms are a new type of evolutionary algorithm that aims to generate a set of high-quality and diverse solutions. They have found many successful applications in reinforcement learning, robotics, scientific design, etc. However, several challenges hinder the widespread application of QD, including the lack of theoretical foundations, the difficulty of behavior definition, and the low sample and resource efficiency. In this talk, I will introduce our recent efforts towards addressing these challenges. Firstly, we provide theoretical justification for one major benefit of QD, i.e., that QD algorithms can be helpful for optimization. Secondly, we propose to learn behaviors from human feedback. Thirdly, we design a series of strategies (including efficient parent selection, solution representation, and population management) to improve the sample and resource efficiency of QD algorithms. Finally, we give an application of QD in human-AI coordination.
(These works were published at ICLR’22/24, the NeurIPS’23 ALOE Workshop, IJCAI’23/24, and ICML’24, mainly in collaboration with Qian’s PhD students Ke Xue and Ren-Jian Wang.)
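For readers new to QD, the following minimal Python sketch shows a MAP-Elites-style loop, the canonical QD scheme of keeping the best solution per behavior cell; it illustrates the general QD idea only, not the specific algorithms from the talk:

```python
import random

def map_elites(f, behavior, sample, mutate, iters=100_000):
    """Minimal MAP-Elites-style QD loop: keep the best solution
    found in each behavior cell, yielding a set of solutions
    that is both high-quality and diverse."""
    archive = {}  # behavior cell -> (fitness, solution)
    for _ in range(iters):
        if archive and random.random() < 0.9:
            _, x = random.choice(list(archive.values()))  # pick an elite
            x = mutate(x)
        else:
            x = sample()  # occasional fresh random solution
        cell = behavior(x)
        fx = f(x)
        if cell not in archive or fx > archive[cell][0]:
            archive[cell] = (fx, x)  # new elite for this cell
    return archive
```

On bit strings, for example, one could take `f = sum`, let `behavior` count ones in the first half of the string, and use random sampling and single-bit-flip mutation for `sample` and `mutate`.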