ThRaSH Seminar Series

About ThRaSH

Randomized search heuristics such as stochastic gradient methods, simulated annealing, evolutionary algorithms, stochastic neural networks for optimization, ant colony and swarm optimization, and the cross-entropy method are frequently used across many scientific communities. They have been successfully applied in various domains, for both combinatorial and numerical optimization. Despite their success in practice, proving that such algorithms satisfy certain performance guarantees is still a difficult and widely open problem.

The mission of the theory of randomized search heuristics (ThRaSH) seminar series is to contribute to the theoretical understanding of randomized search heuristics, in particular of their computational complexity. The aim is to stimulate interactions within the research field and between people from different disciplines working on randomized algorithms. The primary focus is on discussing recent ideas and detecting challenging topics for future work, rather than on the presentation of final results.

Steering Committee (alphabetical order)

Talks in Autumn/Winter 2024

When? On Tuesdays at 15:00 CE(S)T. This is 13:00 UTC on 22 October and 14:00 UTC after that (when European daylight saving time ends).

Where? Here is a permanent Zoom link.

Mailing list? Subscribe to the seminar mailing list here. If you want to receive all ThRaSH-related news, subscribe to this list as well.

Organizers: Andre Opris and Per Kristian Lehre.

Schedule

Abstracts of the talks

22 October 2024

First Running Time Analysis of the Strength Pareto Evolutionary Algorithm 2 (SPEA2)
Shengjie Ren, School of Artificial Intelligence, Nanjing University, China

Evolutionary algorithms (EAs) have emerged as a predominant approach for addressing multi-objective optimization problems. However, the theoretical foundation of multi-objective EAs (MOEAs), particularly fundamental aspects such as running time analysis, remains largely underexplored. Existing theoretical studies mainly focus on basic MOEAs, with little attention given to practical ones. Recently, researchers have begun to examine practical MOEAs, e.g., NSGA-II/III, MOEA/D, and SMS-EMOA. However, a running time analysis of the Strength Pareto Evolutionary Algorithm 2 (SPEA2), one of the most popular MOEAs, has not yet been conducted.

In this talk, we will present the first theoretical analysis of the running time of SPEA2 on three commonly studied multi-objective optimization problems: m-OneMinMax, m-LeadingOnesTrailingZeroes, and m-OneJumpZeroJump. We show that SPEA2 achieves expected running times of O(µn·min{m·log(n), n}), O(µn²), and O(µn^k·min{mn, 3^(m/2)}) for these problems, respectively, where m is the number of objectives, µ is the population size, and k is the gap size of the m-OneJumpZeroJump problem. The proofs rely on general theorems that are also applicable to analyzing the expected running time of other MOEAs on these problems, and can thus be helpful for future theoretical analyses of MOEAs.

29 October 2024

Runtime Analysis of Coevolutionary Algorithms on a Class of Symmetric Zero-Sum Games
Alistair Benford, School of Computer Science, University of Birmingham, UK

Coevolutionary algorithms (CoEAs) have a natural application to game playing. While this application has been extensively studied empirically, corresponding runtime analyses so far apply to only a few settings, leaving great potential for further theoretical investigation. Many games, whether enjoyed by human players or describing real-world competition, are symmetric (rules apply the same to all players) and zero-sum (one player's gain is another's loss).

With a view to advancing theoretical analysis of CoEAs, we consider a class of symmetric zero-sum games for which the role of population diversity is pivotal to an algorithm's success. We prove that a broad class of algorithms that do not utilise a population has superpolynomial runtime for this class. In the other direction, we prove that, with high probability, a coevolutionary instance of the UMDA finds the unique Nash equilibrium in time O(n·(log n)²). Together, these results indicate the importance of generating diverse search points for evolving better strategies.
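For readers unfamiliar with the UMDA (univariate marginal distribution algorithm), a minimal sketch may help. This is a generic illustration on OneMax, not the coevolutionary instance analysed in the talk, and all parameter values are arbitrary choices:

```python
import random

def umda(fitness, n, lam=100, mu=50, steps=200):
    """Minimal UMDA sketch: sample lam bitstrings from a product
    distribution, keep the mu fittest, and re-estimate the per-bit
    marginals from them (with the usual 1/n borders)."""
    margin = 1.0 / n
    p = [0.5] * n  # marginal probability of a 1 at each position
    best = None
    for _ in range(steps):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        for i in range(n):
            freq = sum(x[i] for x in pop[:mu]) / mu
            p[i] = min(1 - margin, max(margin, freq))  # clamp to borders
    return best

# Illustration on OneMax (sum of bits), not the games from the talk:
best = umda(sum, n=20)
```

The borders 1/n and 1 - 1/n prevent marginals from fixing prematurely, which is exactly the kind of diversity mechanism the talk's results speak to.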

In this talk we present these results, and additionally consider further runtime analysis for the performance of CoEAs on impartial combinatorial games.

5 November 2024

Finding the Hidden Subset in Hidden Subset Problems
Jon Rowe, University of Birmingham, UK

A hidden subset problem is defined on n bits, embedded in N bits (where N is typically much larger than n). The remaining N-n bits are neutral and do not affect the fitness of a string. For certain hidden subset problems, there exist evolutionary algorithms that find an optimal string in time depending only on n and not on N. However, there seems to be no way for the algorithm to know when it has found the solution, nor can the relevant bits be identified once it has done so. We address this problem by running log(N) copies of the algorithm in parallel and taking a vote for the value in each bit position. Along the way, we prove two lemmas of potentially broad application. The first gives an easy way to bound the time taken for a number of parallel runs of a process to complete. The second states that the probability that a (1+1) EA, with any mutation scheme, produces an incorrect bit value for a weakly monotonic function with a unique optimum is at most 1/2 at each time step.
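The voting scheme is simple enough to sketch. Below is a hypothetical instance in which only the first n = 6 of N = 30 bits are relevant; the parameter values are arbitrary, and the (1+1) EA here uses standard 1/N bit-flip mutation:

```python
import random

def one_plus_one_ea(fitness, N, budget):
    """Standard (1+1) EA with per-bit flip rate 1/N; ties are accepted,
    so the neutral bits keep drifting."""
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(budget):
        y = [1 - b if random.random() < 1.0 / N else b for b in x]
        if fitness(y) >= fitness(x):
            x = y
    return x

def majority_vote(fitness, N, runs, budget):
    """The talk's scheme: run independent copies (there, about log(N)
    many) and take a per-bit majority vote over their final strings."""
    finals = [one_plus_one_ea(fitness, N, budget) for _ in range(runs)]
    return [1 if 2 * sum(col) > len(finals) else 0 for col in zip(*finals)]

# Hypothetical hidden-subset fitness: only the first 6 of 30 bits matter.
hidden = lambda x: sum(x[:6])
voted = majority_vote(hidden, N=30, runs=9, budget=1000)
```

On the relevant positions each run is correct with probability greater than 1/2 (the talk's second lemma), so the vote recovers them; the neutral positions remain arbitrary, which is the point: they can now be told apart by their disagreement across runs.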

12 November 2024

Overcoming Binary Adversarial Optimisation with Competitive Coevolution
Shishen Lin, School of Computer Science, University of Birmingham, UK

Co-evolutionary algorithms (CoEAs), which pair candidate designs with test cases, are frequently used in adversarial optimisation, particularly for binary test-based problems where designs and tests yield binary outcomes. The effectiveness of a design is determined by its performance against tests, and the value of a test is based on its ability to identify failing designs, often leading to more sophisticated tests and improved designs. However, CoEAs can exhibit complex, sometimes pathological behaviours such as disengagement. Through runtime analysis, we aim to rigorously determine whether CoEAs can solve test-based adversarial optimisation problems in expected polynomial runtime. This talk presents the first rigorous runtime analysis of the (1,λ) CoEA for binary test-based adversarial optimisation problems. In particular, we introduce a binary test-based benchmark problem called the Diagonal Problem and initiate the first runtime analysis of this competitive CoEA on it. The mathematical analysis shows that the (1,λ) CoEA can efficiently find an ε-approximation to the optimal solution of the Diagonal Problem, i.e., in expected polynomial time, assuming a sufficiently low mutation rate and a large offspring population size. In contrast, the standard (1,λ) EA fails to find an ε-approximation to the optimal solution of the Diagonal Problem in polynomial time. This illustrates the potential of coevolution for solving binary adversarial optimisation problems.
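A generic (1,λ) CoEA loop may clarify the setup. The sketch below is an assumption-laden simplification: the Diagonal Problem is not defined in the abstract, so the payoff function is left as a parameter, and details such as the interaction scheme may differ from the analysed algorithm:

```python
import random

def one_comma_lambda_coea(payoff, n, lam, chi, steps):
    """Generic (1,lambda) CoEA sketch, not the talk's exact algorithm:
    one design and one test coevolve; each generation every parent is
    replaced by the best of lam mutated offspring ('comma' selection,
    no elitism). payoff(design, test) in {0, 1} says whether the design
    passes the test. Mutation flips each bit independently with rate
    chi/n."""
    def mutate(x):
        return [1 - b if random.random() < chi / n else b for b in x]
    design = [random.randint(0, 1) for _ in range(n)]
    test = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        design = max((mutate(design) for _ in range(lam)),
                     key=lambda d: payoff(d, test))       # designs want to pass
        test = max((mutate(test) for _ in range(lam)),
                   key=lambda t: 1 - payoff(design, t))   # tests want to fail them
    return design, test

# Toy payoff (NOT the Diagonal Problem): a design passes a test iff it
# has at least as many ones.
toy = lambda d, t: 1 if sum(d) >= sum(t) else 0
design, test = one_comma_lambda_coea(toy, n=10, lam=5, chi=1.0, steps=50)
```

The opposing selection pressures in the two `max` calls are what can produce behaviours like disengagement, where one side's improvement signal vanishes.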

19 November 2024

Growing a Population of Feasible Subgraphs
Andrew Sutton, University of Minnesota Duluth, USA

A parameterized graph problem consists of a graph G = (V,E) for which the goal is to identify a small set of vertices subject to a feasibility criterion (e.g., a vertex cover). Many existing evolutionary algorithms evolve populations of candidate vertex sets, and handle infeasibility via a penalty term incorporated into the fitness function. In this talk, we will discuss a different mechanism for tackling this type of problem in which the population consists of already-feasible subgraphs of the original graph G. The task of the EA is then to evolve increasingly larger feasible subgraphs until a solution to the original problem is found. Using a generalization of uniform crossover together with mutation and a problem-specific constraint repair operation inspired by the algorithmic technique of iterative compression, this approach can solve arbitrary k-vertex cover problems in fixed-parameter tractable time, meaning the runtime scales only polynomially with the size of G (but superpolynomially in k). Finally, I will discuss how this approach could also be used in benchmarking applications for generating graph instances that are provably maximal k-coverable subgraphs of an arbitrary graph.
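As a toy illustration of the feasibility invariant (not the talk's actual crossover-and-iterative-compression machinery), one can pair each subgraph with a vertex cover of size at most k and only accept growth steps that keep this pair feasible. Everything below, including the greedy repair, is a hypothetical simplification:

```python
def is_cover(edges, cover):
    """A vertex set covers a subgraph iff every edge has an endpoint in it."""
    return all(u in cover or v in cover for u, v in edges)

def try_add_edge(sub_edges, cover, new_edge, k):
    """Hypothetical, much-simplified growth step in the spirit of the talk:
    add one edge of G to a feasible subgraph, greedily extending its cover
    when needed, and reject the move if no cover of size <= k results.
    (The talk's actual repair uses crossover and iterative compression.)"""
    u, v = new_edge
    if u in cover or v in cover:   # edge already covered: feasibility is free
        return sub_edges + [new_edge], cover
    if len(cover) < k:             # budget left: cover the new edge greedily
        return sub_edges + [new_edge], cover | {u}
    return None                    # would exceed k: keep the old individual
```

The key design choice is that infeasible individuals never enter the population, so no penalty term is needed in the fitness function; fitness can simply reward larger feasible subgraphs.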

26 November 2024

Evolutionary Diversity Optimization: Recent Advances and Existing Challenges
Denis Antipov, University of Adelaide, Australia

Diversity optimization is the problem of finding a diverse set of good solutions to an optimization problem. It arises from real-world problems in which the constraints cannot be strictly formalized, so a single best solution might turn out to be infeasible. Having a diverse set of good solutions reduces the risk that none of them satisfies the constraints. Finding a diverse set of solutions is much harder than single-objective optimization, since the dimensionality of the search space is much higher. Despite this, evolutionary algorithms (EAs) have been shown to be good solvers of diversity optimization problems. In this talk we will discuss recent results in the area of evolutionary diversity optimization, in particular the landscape of such problems, using the vertex cover problem as an example. We will also address the open questions in this area where theory can greatly help practitioners.

3 December 2024

Proven Runtime Guarantees for How the MOEA/D Computes the Pareto Front From the Subproblem Solutions
Martin Krejca, Ecole Polytechnique, France

The decomposition-based multi-objective evolutionary algorithm (MOEA/D) does not directly optimize a given multi-objective function f, but instead optimizes N+1 single-objective subproblems of f in a co-evolutionary manner. It maintains an archive of all non-dominated solutions found and outputs it as an approximation to the Pareto front. Once the MOEA/D has found all optima of the subproblems (the g-optima), it may still miss Pareto optima of f. The algorithm is then tasked with finding the remaining Pareto optima directly by mutating the g-optima.
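The decomposition step can be made concrete with a small sketch. This assumes the common Chebyshev scalarization with evenly spread weight vectors and reference point (n, n); the abstract does not commit to a particular scheme, so treat this purely as an illustration of how N+1 subproblems g_0, ..., g_N arise from a bi-objective f:

```python
def one_min_max(x):
    """Bi-objective OneMinMax: maximize both the number of zeros and the
    number of ones in the bitstring x."""
    ones = sum(x)
    return (len(x) - ones, ones)

def chebyshev_subproblems(n, N):
    """Chebyshev decomposition with weights (1 - i/N, i/N) and reference
    point (n, n) -- one standard MOEA/D scalarization, assumed here for
    illustration. Subproblem i is then minimized by any string with about
    i*n/N ones, so the g-optima sit roughly n/N apart on the Pareto front."""
    def make_g(w0, w1):
        def g(x):
            f0, f1 = one_min_max(x)
            return max(w0 * (n - f0), w1 * (n - f1))
        return g
    return [make_g(1 - i / N, i / N) for i in range(N + 1)]

# N+1 = 5 scalar subproblems for n = 8:
gs = chebyshev_subproblems(8, 4)
```

Under this scheme the gaps of size about n/N between neighbouring g-optima are exactly the distances that mutation must bridge in the second phase discussed below.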

In this talk, we discuss how the MOEA/D with only standard mutation operators computes the whole Pareto front of the OneMinMax benchmark when the g-optima are a strict subset of the Pareto front. For standard bit mutation, we show an expected runtime of O(n·N·log(n) + n^(n/(2N))·N·log(n)) function evaluations. Especially for the second, more interesting phase, when the algorithm starts with all g-optima, we prove an expected runtime of Ω(n^((1/2)·(n/N+1))·N^(1/2)·2^(-n/N)). This runtime is super-polynomial if N = o(n), since this leaves large gaps between the g-optima, which require costly mutations to cover.

For power-law mutation with exponent b in the interval (1,2), we prove an expected runtime of O(n·N·log(n) + n^b·log(n)) function evaluations. The O(n^b·log(n)) term stems from the second phase of starting with all g-optima, and it is independent of the number of subproblems N. This leads to a huge speedup compared to the lower bound for standard bit mutation. In general, our overall bound for power-law mutation suggests that the MOEA/D performs best for N = O(n^(b-1)), resulting in an O(n^b·log(n)) bound. In contrast to standard bit mutation, smaller values of N are better for power-law mutation, as it easily creates the missing solutions.