AI for Attack Identification, Response and Recovery (AIR²)
This WASP NEST project is a collaboration between four Swedish research groups. The principal investigator for AIR² is Simin Nadjm-Tehrani from Linköping University, and the co-PIs are Monowar Bhuyan from Umeå University, Jendrik Seipp from Linköping University, and Rolf Stadler from KTH Stockholm.
In this project, we use AI to enhance cybersecurity. Our goal is to develop resource-efficient systems that can operate with high autonomy even in hostile communication networks. We aim to explore the pros and cons of using machine learning to improve resource efficiency in closed-loop automation. We will focus primarily on creating high-quality software communication infrastructures that
- prevent cyberthreats by predicting and reducing their impact,
- accurately identify ongoing attacks using both classical techniques and AI,
- respond to attacks in an automated and understandable way, and
- quickly recover from attacks by adapting to new circumstances.
The project is split into five work packages (WPs) and our lab leads WP2:
Learning Interpretable Models for Identifying Attacks and Countermeasures
Recent advancements in model-based planning have turned AI planning systems into powerful domain-independent sequential decision makers (Hoffmann and Nebel 2001; Richter and Westphal 2010; Seipp, Keller, and Helmert 2020). Nowadays, the primary bottleneck lies in accurate modeling rather than planning itself. Many current approaches for learning symbolic planning models from execution traces fall short because they assume full observability and no noise, thus limiting their scalability in real-world applications such as large communication systems (Arora et al. 2018; Lamanna et al. 2021). While reinforcement learning algorithms can learn policies without explicit modeling (Sutton and Barto 1998), they often face issues with sparse rewards, changes in their environment, and a lack of interpretability in the resulting policies.
Within WP2 we aim to merge the strengths of both approaches: learning symbolic planning models from data while employing RL to explore, but not control, an unknown state space. We will train RL agents to prioritize the exploration of lesser-known state space areas and use classical symbolic modeling to piece together the gathered information into a coherent planning model. To expand our models to large communication systems, we will also address issues related to partial observability and noisy sensor data.
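The combination described above can be illustrated with a toy sketch: a count-based exploration bonus steers an agent toward rarely visited states, and the observed transitions are assembled into a tabular symbolic model. Everything here (the grid-world environment, class and function names) is hypothetical and only illustrates the general idea, not the project's actual method; in particular, the sketch peeks at a simulator to score successors, which a real agent in an unknown environment could not do.

```python
import random
from collections import defaultdict

class CountBasedExplorer:
    """Hypothetical sketch: explore lesser-known states, record a symbolic model."""

    def __init__(self, actions, bonus_scale=1.0, seed=0):
        self.actions = actions
        self.visits = defaultdict(int)   # state -> visit count
        self.model = defaultdict(set)    # (state, action) -> observed successor states
        self.bonus_scale = bonus_scale
        self.rng = random.Random(seed)

    def bonus(self, state):
        # Exploration bonus decays with visit count: rarely seen states score higher.
        return self.bonus_scale / (1 + self.visits[state]) ** 0.5

    def choose_action(self, state, transition_fn):
        # Greedily pick the action whose successor has the highest bonus;
        # the random tiebreaker avoids systematic preference among equals.
        scored = [(self.bonus(transition_fn(state, a)), self.rng.random(), a)
                  for a in self.actions]
        return max(scored)[2]

    def record(self, state, action, next_state):
        self.visits[next_state] += 1
        self.model[(state, action)].add(next_state)

# Toy deterministic 4x4 grid world standing in for an unknown system.
def step(state, action):
    x, y = state
    dx, dy = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}[action]
    return (max(0, min(3, x + dx)), max(0, min(3, y + dy)))

agent = CountBasedExplorer(["up", "down", "left", "right"])
state = (0, 0)
for _ in range(200):
    action = agent.choose_action(state, step)
    next_state = step(state, action)
    agent.record(state, action, next_state)
    state = next_state
```

After the run, `agent.model` maps each observed (state, action) pair to its successors; in this deterministic toy each entry is a singleton, which is exactly the kind of regularity a symbolic model learner would exploit.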
Our research will contribute to the development of a semi-automatic network hardening loop that uses learned models to plan and respond to cybersecurity attacks. In detail, we will use off-the-shelf state-of-the-art planners to identify network attacks and modify network configurations accordingly to prevent future breaches. By continuously updating the learned models, we will iteratively identify and counter new attacks. In addition, we will use Stackelberg planning (Speicher et al. 2018) to simultaneously discover attacks and countermeasures.
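A minimal sketch of such a hardening loop, under strong simplifying assumptions: the network is an attack graph whose edges are exploitable connections, a breadth-first search stands in for the planner, and each countermeasure simply removes the first exploitable edge on a discovered attack path. All names and the topology are hypothetical; the real project would use domain-independent planners over learned models rather than this hand-written search.

```python
from collections import deque

def plan_attack(edges, start, target):
    """BFS stand-in for a planner: return a list of edges forming an
    attack path from start to target, or None if the target is safe."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while parent[node] is not None:
                prev = parent[node]
                path.append((prev, node))
                node = prev
            return list(reversed(path))
        for (u, v) in edges:
            if u == node and v not in parent:
                parent[v] = node
                queue.append(v)
    return None

def harden(edges, start, target):
    """Hardening loop: find an attack plan, block its first exploit,
    and replan until no attack path remains."""
    edges = set(edges)
    countermeasures = []
    while (attack := plan_attack(edges, start, target)) is not None:
        blocked = attack[0]          # block the earliest exploit in the path
        edges.discard(blocked)
        countermeasures.append(blocked)
    return countermeasures

# Toy network: the attacker enters at "internet", the target is "db".
topology = {("internet", "web"), ("web", "app"), ("app", "db"),
            ("internet", "vpn"), ("vpn", "db")}
fixes = harden(topology, "internet", "db")
```

The loop terminates because every iteration removes an edge; the returned `fixes` are the configuration changes that, applied together, leave no attack path to the target.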
PI for WP2: Jendrik Seipp
Core team for WP2: One PhD student, Rolf Stadler (KTH)
Funding: This project is supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.