ThRaSH Seminars Series, Autumn 2023 – Winter 2024

When? On Tuesdays at 14:00 CE(S)T. This is 12:00 UTC before 29 October and 13:00 UTC after that.

Where? Here is a permanent Zoom link.

Mailing list? Subscribe to the seminar mailing list here. If you want to get all the ThRaSH-related news, subscribe to this list as well.

Organizers: Martin Krejca and Denis Antipov.

Schedule

All seminars will be added to this Google calendar (the times are displayed in CE(S)T). To subscribe to this calendar, use this link.

Abstracts of the talks

24 October 2023

The Conditioning of Drift Theorems: To Filtrate or Not To Filtrate? — Timo Kötzing, Hasso Plattner Institute, Potsdam, Germany

The talk is dedicated to the conditions of the expectations in drift analysis. While in general we need to condition on filtrations, in the analysis of evolutionary algorithms we can often use more specific conditions, e.g., conditioning on the previous states of the process. The goal of this talk is to highlight the differences between these approaches and to discuss which conditions of drift theorems are most useful in practice.
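
Schematically, the two flavours of conditioning look as follows for additive drift (a sketch of the standard statements, not of the exact theorems discussed in the talk):

```latex
% Additive drift stated two ways (schematically). Let (X_t) be a
% non-negative process and T the first time it hits 0.
%
% (1) Conditioning on the natural filtration (F_t), i.e., on the
%     entire history of the process up to time t:
\[
  \mathbb{E}\left[ X_t - X_{t+1} \mid \mathcal{F}_t \right] \ge \delta
  \quad \text{on } \{ T > t \},
\]
% (2) Conditioning only on the current state, which suffices for
%     Markov processes such as many evolutionary algorithms:
\[
  \mathbb{E}\left[ X_t - X_{t+1} \mid X_t = s \right] \ge \delta
  \quad \text{for all } s > 0.
\]
% In both variants the conclusion is the same bound on the expected
% hitting time:  E[T] <= E[X_0] / delta.
```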

31 October 2023

First Steps Towards a Runtime Analysis of Neuroevolution — Carsten Witt, Technical University of Denmark, Lyngby, Denmark

We consider a simple setting in neuroevolution where an evolutionary algorithm optimizes the weights and activation functions of a simple artificial neural network. We then define simple example functions to be learned by the network and conduct rigorous runtime analyses for networks with a single neuron and for a more advanced structure with several neurons and two layers. Our results show that the proposed algorithm is generally efficient on two example problems designed for one neuron and efficient with at least constant probability on the example problem for a two-layer network. In particular, the so-called harmonic mutation operator, which chooses steps of size j with probability proportional to 1/j, turns out to be a good choice for the underlying search space. However, for the case of one neuron, we also identify situations with hard-to-overcome local optima.

This is joint work with Paul Fischer and Emil Lundt Larsen.
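
As an illustration of the harmonic mutation operator mentioned in the abstract, here is a minimal Python sketch. The encoding of a weight as a bounded integer is an assumption of this sketch; the talk's exact search space may differ.

```python
import random

def harmonic_step(max_step: int) -> int:
    """Sample a step size j in {1, ..., max_step} with Pr[j] proportional to 1/j."""
    weights = [1.0 / j for j in range(1, max_step + 1)]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in enumerate(weights, start=1):
        acc += w
        if r <= acc:
            return j
    return max_step  # guard against floating-point rounding

def harmonic_mutate(value: int, lo: int, hi: int) -> int:
    """Move an integer-encoded weight by +/- j, clamped to the range [lo, hi]."""
    j = harmonic_step(hi - lo)
    step = j if random.random() < 0.5 else -j
    return min(hi, max(lo, value + step))
```

Large jumps remain possible but become only harmonically less likely, which is what makes this operator a compromise between local and global search on integer ranges.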

7 November 2023

Runtime Analysis of Quality Diversity Algorithms — Dirk Sudholt, the University of Passau, Passau, Germany

Quality diversity (QD) is a branch of evolutionary computation that has gained increasing interest in recent years. The Map-Elites QD approach defines a feature space, i.e., a partition of the search space, and stores the best solution for each cell of this space. We study a simple QD algorithm in the context of pseudo-Boolean optimisation on the “number of ones” feature space, where the ith cell stores the best solution amongst those with a number of ones in [(i − 1)k, ik − 1]. Here k is a granularity parameter with 1 ≤ k ≤ n + 1. We give a tight bound on the expected time until all cells are covered for arbitrary fitness functions and for all k, and we analyse the expected optimisation time of QD on OneMax and other problems whose structure aligns favourably with the feature space. On combinatorial problems we show that QD efficiently finds a (1 − 1/e)-approximation when maximising any monotone submodular function with a single uniform cardinality constraint. Defining the feature space as the number of connected components (of the subgraph chosen by a solution), we show that QD finds a minimum spanning tree of a connected graph in expected polynomial time.

We further consider QD’s performance on classes of transformed functions in which the feature space is not well aligned with the problem. The asymptotic performance is unaffected by transformations on easy functions like OneMax. Applying a worst-case transformation to a deceptive problem increases the expected optimisation time from O(n^2 log n) to exponential. However, QD is still faster than a (1+1) EA by an exponential factor.

Joint work with Jakob Bossek (based on a GECCO 2023 paper, together with a subset of the journal extension).
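
The following minimal Python sketch illustrates the Map-Elites scheme on the “number of ones” feature space described above. The initialisation, budget handling and the maximised fitness f are assumptions of this sketch, not details from the paper.

```python
import random

def cell_index(x: list[int], k: int) -> int:
    """Cell i (0-indexed here) covers solutions whose number of ones
    lies in [ik, (i+1)k - 1], matching the paper's 1-indexed cells."""
    return sum(x) // k

def map_elites_onemax_space(f, n: int, k: int, budget: int) -> dict:
    """Map-Elites sketch: keep the best solution seen per feature-space cell."""
    archive: dict[int, tuple[float, list[int]]] = {}
    x = [random.randint(0, 1) for _ in range(n)]
    archive[cell_index(x, k)] = (f(x), x)
    for _ in range(budget):
        _, parent = random.choice(list(archive.values()))  # uniform over elites
        # standard bit-wise mutation: flip each bit independently with prob. 1/n
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        c, fc = cell_index(child, k), f(child)
        if c not in archive or fc > archive[c][0]:  # keep the better elite per cell
            archive[c] = (fc, child)
    return archive
```

The paper’s covering-time bound concerns exactly the number of iterations until every cell of this archive is non-empty.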

14 November 2023

The Move Acceptance Hyper-Heuristic Re-Revisited — Benjamin Doerr, École Polytechnique, Palaiseau, France

In their recent paper, Lissovoi, Oliveto, and Warwicker (Artificial Intelligence (2023)) showed that the Move Acceptance Hyper-Heuristic (MAHH) copes extremely well with the local optima of the CLIFF problem, solving this problem in an expected number of O(n^3) fitness evaluations and proving that this type of non-elitism can be very effective. For the more prevalent JUMP benchmark, the results of [LOW23] are inconclusive, neither ruling out great speed-ups nor drastic performance losses compared to classic evolutionary algorithms (EAs).

In their subsequent GECCO 2023 paper, Dremaux, Doerr, Lutzeyer and Stumpf revisited this problem and determined the runtime of the MAHH on JUMP_k to be essentially n^(2k-1), hence almost the square of the "usual" runtime of Θ(n^k). However, they also showed that if the 1-bit-flip mutation operator common in hyper-heuristics is replaced by bit-wise mutation, the mutation operator common in EAs, then the runtime drops to just a little more than n^k.

We now re-revisit this problem and show that the MAHH with bit-wise mutation solves JUMP problems with at least logarithmic jump size k asymptotically faster than simple mutation-based EAs. This is one of the rare examples where an algorithm profits from its non-elitism by genuinely walking through a valley of low fitness via the repeated acceptance of inferior solutions. This is joint work with Johannes Lutzeyer (École Polytechnique).
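
For concreteness, here is a minimal Python sketch of the MAHH with bit-wise mutation on JUMP. It assumes the common form of the hyper-heuristic that, in every iteration, picks the ALLMOVES acceptance operator with probability p and ONLYIMPROVING otherwise; the exact parameterisation in the cited papers may differ.

```python
import random

def jump(k: int, n: int):
    """Classic JUMP_k benchmark: OneMax-like, but with a fitness valley
    of width k - 1 just below the all-ones optimum."""
    def f(x: list[int]) -> int:
        ones = sum(x)
        if ones == n or ones <= n - k:
            return k + ones
        return n - ones  # inside the valley: more ones means worse fitness
    return f

def mahh(f, n: int, p: float, budget: int):
    """Move Acceptance Hyper-Heuristic sketch with bit-wise mutation."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(budget):
        y = [b ^ (random.random() < 1.0 / n) for b in x]  # flip each bit w.p. 1/n
        fy = f(y)
        accept_all = random.random() < p  # ALLMOVES w.p. p, ONLYIMPROVING otherwise
        if accept_all or fy > fx:
            x, fx = y, fy
    return x, fx
```

The repeated acceptance of inferior solutions via ALLMOVES is exactly the mechanism that lets the algorithm walk through the valley of JUMP instead of having to jump over it in one mutation.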

21 November 2023

A proof that crossover can guarantee exponential speed-ups in evolutionary multiobjective optimization — Andre Opris, the University of Passau, Passau, Germany

Evolutionary algorithms are popular algorithms for multiobjective optimisation (also called Pareto optimisation) as they use a population to store trade-offs between different objectives. Despite their popularity, the theoretical foundation of multiobjective evolutionary optimisation (EMO) is still in its early development. Fundamental questions such as the benefits of different crossover operators are still not fully understood. We provide a theoretical analysis of the well-known EMO algorithms GSEMO and NSGA-II to showcase the possible advantages of crossover: For each of one-point crossover and uniform crossover, we propose a class of functions on which these EMO algorithms with crossover find the Pareto set in expected polynomial time. In sharp contrast, they and many other EMO algorithms without crossover require expected exponential time to optimise these function classes, and in most cases even to find a single Pareto-optimal point.
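
For readers less familiar with the two operators compared in the talk, these are textbook sketches in Python, each returning a single offspring; the concrete integration into GSEMO and NSGA-II follows the respective algorithm and is not shown here.

```python
import random

def one_point_crossover(a: list[int], b: list[int]) -> list[int]:
    """Cut both parents at one common random position and join the
    prefix of one parent with the suffix of the other."""
    cut = random.randint(0, len(a))
    return a[:cut] + b[cut:]

def uniform_crossover(a: list[int], b: list[int]) -> list[int]:
    """Choose each bit independently from either parent with probability 1/2."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]
```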

28 November 2023, 12:00 UTC (one hour earlier!)

Biased Continuous Algorithms: How to Take Them to the Grave? — Maxim Buzdalov, Aberystwyth University, Aberystwyth, Wales

When designing a black-box optimizer, we normally want it to be robust enough to solve isomorphic problems equally well. Some other people, however, build their careers on tailoring their algorithms to counterfeit benchmarks using hidden questionable design patterns and claiming their superiority over other algorithms. With a focus on continuous optimization, we discuss some of these patterns and, in particular, present a tool that helps discover what is probably the most offending class: origin-biased algorithms.
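
The tool from the talk is not reproduced here, but the underlying idea of probing for origin bias can be illustrated as follows: compare an optimizer's results on a function and on a randomly shifted copy of it; since the two problems are isomorphic, a systematic gap points to a bias towards the origin. The names and the optimize(f, dim) interface in this Python sketch are hypothetical.

```python
import random

def shifted(f, c: list[float]):
    """Translate a continuous objective by the vector c; the shifted
    problem is isomorphic to the original one."""
    return lambda x: f([xi - ci for xi, ci in zip(x, c)])

def origin_bias_probe(optimize, f, dim: int, trials: int = 20) -> float:
    """Hypothetical probe: average best fitness on a shifted copy minus
    the average on the original. For a minimisation problem, a
    consistently large positive gap suggests the optimizer exploits
    solutions near the origin."""
    base = sum(optimize(f, dim) for _ in range(trials)) / trials
    c = [random.uniform(-3.0, 3.0) for _ in range(dim)]
    moved = sum(optimize(shifted(f, c), dim) for _ in range(trials)) / trials
    return moved - base
```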

5 December 2023

Analysing the Robustness of NSGA-II under Noise — Duc Cuong Dang, the University of Passau, Passau, Germany

Runtime analysis has produced many results on the efficiency of simple evolutionary algorithms like the (1+1) EA and of its analogue in evolutionary multiobjective optimisation (EMO), the GSEMO. Recently, the first runtime analyses of the famous and highly cited EMO algorithm NSGA-II have emerged, demonstrating that practical algorithms with thousands of applications can be rigorously analysed. However, these results only show that NSGA-II has the same performance guarantees as GSEMO, and it is unclear how and when NSGA-II can outperform GSEMO.

We study this question in noisy optimisation and consider a noise model that adds large amounts of posterior noise to all objectives with some constant probability p per evaluation. We show that GSEMO fails badly on every noisy fitness function as it tends to remove large parts of the population indiscriminately. In contrast, NSGA-II is able to handle the noise efficiently on LeadingOnesTrailingZeroes when p < 1/2, as the algorithm is able to preserve useful search points even in the presence of noise. We identify a phase transition at p = 1/2 where the expected time to cover the Pareto front changes from polynomial to exponential. To our knowledge, this is the first proof that NSGA-II can outperform GSEMO and the first runtime analysis of NSGA-II in noisy optimisation.

This is joint work with Andre Opris, Bahare Salehi and Dirk Sudholt.
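
A minimal sketch of the noise model studied in the talk, under the assumptions that each evaluation is independently disturbed and that the noise is uniform (the paper's exact distribution may differ):

```python
import random

def with_posterior_noise(f_multi, p: float, magnitude: float):
    """Wrap a multi-objective function: with probability p per evaluation,
    add large posterior noise to every objective value."""
    def g(x):
        values = list(f_multi(x))
        if random.random() < p:  # noisy evaluation with constant probability p
            return [v + random.uniform(-magnitude, magnitude) for v in values]
        return values
    return g
```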

12 December 2023

Stochastic Population Update Can Provably Be Helpful in Multi-Objective Evolutionary Algorithms — Chao Qian, School of Artificial Intelligence, Nanjing University, China

Evolutionary algorithms (EAs) have been widely and successfully applied to solve multi-objective optimization problems, due to their population-based nature. Population update is a key component in multi-objective EAs (MOEAs), and it is usually performed in a greedy, deterministic manner: the next-generation population is formed by selecting the top-ranked solutions, up to the population size (based on some selection criteria, e.g., non-dominated sorting, crowdedness and indicators), from the collection of the current population and the newly generated solutions. In this work, we question this practice. We analytically show that introducing randomness into the population update procedure in MOEAs can be beneficial for the search. More specifically, we prove that the expected running time of two well-established MOEAs, SMS-EMOA (2009 citations in Google Scholar) and NSGA-II (47549 citations in Google Scholar), for solving two biobjective problems, OneJumpZeroJump (introduced by [Doerr and Zheng, AAAI’21]) and bi-objective RealRoyalroad (introduced by [Dang et al., AAAI’23]), can be exponentially decreased if their deterministic population update mechanism is replaced by a stochastic one. Empirical studies also verify the effectiveness of stochastic population update. This work is an attempt to challenge a common practice for population update in MOEAs. Its positive results, which might hold more generally, should encourage the development of new MOEAs in the area.

This work significantly extends our preliminary work published at IJCAI’23, which only considers SMS-EMOA solving the OneJumpZeroJump problem. In this version, we consider SMS-EMOA and NSGA-II solving the OneJumpZeroJump and bi-objective RealRoyalroad problems, providing more evidence for the effectiveness of stochastic population update.
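
As an illustration of the difference, here is a Python sketch. The deterministic update removes the worst solution of the whole combined population; the stochastic variant removes the worst of a uniformly random half. The helper worst_of, which implements the algorithm's ranking (e.g., non-dominated sorting plus hypervolume contribution or crowding distance), is assumed, and the exact stochastic scheme analysed in the paper may differ in detail.

```python
import random

def deterministic_update(combined, worst_of):
    """Greedy, deterministic update: drop the worst individual of the
    whole combined population (parents plus offspring)."""
    loser = worst_of(combined)
    return [ind for ind in combined if ind is not loser]

def stochastic_update(combined, worst_of):
    """Stochastic update sketch: drop the worst individual of a
    uniformly random half of the combined population instead."""
    half = random.sample(combined, (len(combined) + 1) // 2)
    loser = worst_of(half)
    return [ind for ind in combined if ind is not loser]
```

Under the stochastic update, an inferior solution occasionally survives, and this is precisely the kind of randomness the analysis shows to be helpful for crossing fitness valleys.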

19 December 2023

How to Use the Metropolis Algorithm for Multi-Objective Optimization? — Mingfeng Li, Harbin Institute of Technology, Shenzhen, China

The Metropolis algorithm can cope with local optima by accepting inferior solutions with a suitably small probability. That this can work well was not only observed in empirical research, but also shown via mathematical runtime analyses on single-objective benchmarks. This paper takes several steps towards understanding, again via theoretical means, whether such advantages can also be obtained in multi-objective optimization.

The original Metropolis algorithm has two components: one-bit mutation and the acceptance strategy, which allows accepting inferior solutions. When the acceptance strategy is adjusted to multi-objective optimization in such a way that an accepted inferior solution replaces its parent, the Metropolis algorithm is not very efficient on our multi-objective version of the multimodal DLB benchmark, called DLTB. With one-bit mutation, this multi-objective Metropolis algorithm cannot optimize the DLTB problem; with standard bit-wise mutation, it needs at least Ω(n^5) time to cover the full Pareto front. We also show that many other multi-objective optimizers, namely GSEMO, SMS-EMOA, and NSGA-II, only need time O(n^4).

When the parent is kept when an inferior point is accepted, the multi-objective Metropolis algorithm solves the DLTB problem efficiently with both one-bit and standard bit-wise mutation, with one-bit mutation experimentally leading to better results than several other algorithms.

Overall, our work suggests that the general mechanism of the Metropolis algorithm can be interesting in multi-objective optimization, but that the implementation details can have a huge impact on the performance.

This is joint work with Renzhong Deng, Benjamin Doerr, and Weijie Zheng.
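
To make the two design choices concrete, here is a Python sketch of one step of a multi-objective Metropolis algorithm (maximisation). The acceptance probability exp(-alpha * deficit) and the dominance handling are illustrative assumptions; the talk's precise definitions may differ.

```python
import math
import random

def mo_metropolis_step(x, fx, f, mutate, alpha: float, keep_parent: bool, archive: list):
    """One step: always accept an offspring that weakly dominates its
    parent; accept an inferior offspring only with a Metropolis-style
    probability. With keep_parent=True, the parent is stored in the
    archive even when an inferior offspring replaces it as the current
    search point (the variant that solves DLTB efficiently)."""
    y = mutate(x)
    fy = f(y)
    if all(a >= b for a, b in zip(fy, fx)):  # offspring weakly dominates parent
        return y, fy
    deficit = sum(max(0.0, b - a) for a, b in zip(fy, fx))  # total loss over objectives
    if random.random() < math.exp(-alpha * deficit):  # accept an inferior solution
        if keep_parent:
            archive.append((x, fx))  # the parent survives in the archive
        return y, fy
    return x, fx
```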