Positions

We expect that there will be more openings for PhD students and postdocs in the near future. If you are interested in working with us, please contact Jendrik Seipp.

Theses

We offer interesting topics for master's theses in the field of AI/ML, often with a focus on AI planning. If you are interested, please contact us by email, preferably including a short CV. Together, we will then develop a topic that suits your interests and background. Here are some examples of possible topics:

Comparing deep reinforcement learning and classical planning for solving puzzles

A recent paper introduced DeepCubeA, a machine learning pipeline that learns to solve Rubik's Cube tasks (and other puzzles). In experiments, DeepCubeA compares favorably against optimal classical planning algorithms, but unlike those baselines it does not guarantee optimal solutions. To obtain a fair comparison, we want to evaluate DeepCubeA against several suboptimal algorithms from the literature, specifically algorithms that are bounded-suboptimal or that improve their solutions over time and thus come close to optimal solutions. In addition to Rubik's Cube, we will use tasks from Lights Out, the n-puzzle, and Sokoban.

Automatically detecting and fixing errors in planning models

The performance of modern planning systems depends on many factors. One important factor is whether the input planning task contains redundant information. A common example is unused action parameters: they are present in many planning benchmarks and can lead to large numbers of duplicate ground actions. This thesis aims to fix standard benchmark sets using a validation tool developed within our lab, and to run an empirical analysis that tests whether this improves the performance of existing planning systems.
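To see why unused parameters matter, consider a toy grounding sketch (the action and object names below are made up for illustration): each extra parameter is instantiated with every object, so a parameter that never appears in the precondition or effect multiplies the number of ground actions without changing their semantics.

```python
from itertools import product

def ground(parameters, objects):
    """Return all ground instantiations of an action's parameter list."""
    return list(product(objects, repeat=len(parameters)))

objects = ["a", "b", "c"]

# move(?from, ?to): both parameters are used in the action's body.
used = ground(["?from", "?to"], objects)                  # 3^2 = 9 ground actions

# move(?from, ?to, ?junk): ?junk never occurs in precondition or effect,
# so every (?from, ?to) instantiation appears |objects| times.
with_unused = ground(["?from", "?to", "?junk"], objects)  # 3^3 = 27 ground actions

# Dropping the unused parameter collapses the duplicates again.
deduplicated = {g[:2] for g in with_unused}

print(len(used), len(with_unused), len(deduplicated))     # 9 27 9
```

With more objects or several unused parameters, the blow-up is exponential, which is why removing them can noticeably speed up grounding-based planners.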

Evolution of Planners

Researchers have been developing AI planners for more than three decades. However, since each research paper uses a different set of benchmarks, hardware, and planners, it is hard to judge the progress of the AI planning field. In this project, we want to explore how planner performance has evolved over time. To this end, we will select the most important planners and run them on the same set of benchmarks from past International Planning Competitions. The main challenge will be to obtain, compile, and run the old planners on modern hardware. Luckily, container technologies such as Docker and Apptainer can help with this.

Beyond shortest paths: maximizing rewards in classical planning

Planning is a fundamental aspect of artificial intelligence: it involves devising a sequence of actions that guides an intelligent agent from its current state to a goal state. Classical planning typically seeks a cost-optimal plan, where each action has an associated cost and the objective is to minimize the total cost of the action sequence. In practice, however, some problems are better addressed by maximizing rewards associated with actions, with applications as diverse as computational linguistics, power grid reconfiguration, and error-correcting code theory. This project explores the challenge of finding a reward-optimal plan or, in a unit-cost setting, the longest plan. For a more detailed description of the project, see the attached document.

Visualizing action plans with generative machine learning models

Generative models are now powerful enough to generate realistic images and videos from natural language texts (see e.g. Du et al.). In this project we want to test whether such deep learning models can be used to generate visualizations of action plans obtained with AI planners. The goal is to generate visualizations that are easy to understand for humans and that can be used to improve the plans.

Using LLMs to convert logic formulas to natural language

Large language models (LLMs) such as GPT-4 can generate natural language text that is often indistinguishable from human-written text. In this project, we want to explore whether LLMs can be used to convert logic formulas into natural language. This would be useful for explaining the meaning of logic formulas to non-experts, for example in the context of AI planning. For instance, for the first-order logic formula ∀x ∃y: on(x,y), the LLM should generate a sentence like "For every object x, there is an object y such that x is on y". We will focus on Description Logic formulas generated by the DLPlan system developed within our group.
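As a toy illustration of the desired input-output mapping (not a proposed method — the thesis would prompt an LLM with real DLPlan output), a hand-written verbalizer for simple two-quantifier formulas might look as follows; the formula representation and wording here are assumptions made for the example.

```python
# English templates for the two first-order quantifiers.
QUANTIFIERS = {"forall": "for every object", "exists": "there is an object"}

def verbalize(q1, v1, q2, v2, pred, arg1, arg2):
    """Render a formula of the shape Q1 v1 Q2 v2: pred(arg1, arg2) in English."""
    return (f"{QUANTIFIERS[q1].capitalize()} {v1}, "
            f"{QUANTIFIERS[q2]} {v2} such that {arg1} is {pred} {arg2}.")

# The running example from the project description: ∀x ∃y: on(x,y).
sentence = verbalize("forall", "x", "exists", "y", "on", "x", "y")
print(sentence)  # For every object x, there is an object y such that x is on y.
```

Such rigid templates break down quickly for nested or domain-specific formulas, which is exactly the gap an LLM-based approach would aim to close.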

Creating an email writing assistant

We want to compare different offline and online language models for the task of completing emails within the Thunderbird email client. Important challenges will be to figure out how complex the language models need to be and how much information from previous conversations we need to feed them to provide useful assistance. We want to find out whether we can build a freely available, privacy-preserving writing assistant. The ultimate aim is a Thunderbird add-on that assists users around the world in writing emails.

Completed Theses

Learning Partial Policies for Intractable Domains on Tractable Subsets

Viktor Carlsson  

Master's thesis, August 2023. Download: (PDF) (DiVA)


Compact Representations of State Sets in State Space Search

Hugo Axandersson  

Master's thesis, June 2023. Download: (PDF) (DiVA)