
Explainable Monitoring
The digital revolution is here, but we are not ready for it. Artificial intelligence already automates many processes in real-world applications, such as train operation or financial fraud detection. It is, however, brittle in novel scenarios and does not allow humans to validate its reasoning. This research will marry formal mathematical reasoning with machine learning and visualization to ensure in real time that deployed algorithms behave the way human experts expect. Explainable Monitoring will make interpretability an intrinsic feature of formal real-time monitoring. (NWO Veni 2024–2027)

RAIL
Combining formal methods and machine learning to react quickly to changes and disruptions in the rail network, in collaboration with the Dutch Railways (NS), ProRail, and Utrecht University. (NWO 2024–2028)
Certification of Stochastic Systems (2023–ongoing)

VeRecycle
The first framework to formally reclaim guarantees for discrete-time stochastic dynamical systems. VeRecycle efficiently reuses probabilistic certificates when the system dynamics deviate only in a given subset of states.
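The observation behind this can be sketched in illustrative notation (an assumption of this sketch, not the paper's formalism): certificate conditions are pointwise over states, so a deviation confined to a subset leaves the conditions intact everywhere else.

```latex
% Illustrative sketch with assumed notation: a probabilistic certificate V
% for dynamics f is defined by pointwise conditions such as
\[
  \mathbb{E}_{\omega}\bigl[ V(f(x, \omega)) \bigr] \le V(x)
  \qquad \text{for all } x \in X .
\]
% If the new dynamics f' agree with f outside a subset S of the states,
% this condition still holds verbatim on X \setminus S, so only
\[
  \mathbb{E}_{\omega}\bigl[ V(f'(x, \omega)) \bigr] \le V(x)
  \qquad \text{for } x \in S
\]
% needs to be re-established, with the guarantee adjusted where it fails.
```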

Neural Continuous-Time Supermartingale Certificates
The first neural supermartingale for continuous-time reasoning with provable guarantees, applicable to combinations of reachability, avoidance, and persistence properties.
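For background, a minimal sketch of the classical discrete-time supermartingale argument that such certificates build on (notation illustrative; the continuous-time setting replaces the one-step condition with a condition on the process generator):

```latex
% A nonnegative function V with the one-step supermartingale property
\[
  \mathbb{E}\bigl[ V(X_{t+1}) \mid X_t = x \bigr] \le V(x)
\]
% bounds, via Ville's inequality, the probability of ever reaching a
% level set \{ V \ge \alpha \} chosen to contain the unsafe states:
\[
  \Pr\Bigl[\, \sup_{t \ge 0} V(X_t) \ge \alpha \Bigr]
  \le \frac{\mathbb{E}[V(X_0)]}{\alpha} .
\]
```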

Composing Reinforcement Learning Policies with Formal Guarantees
Guarantees are critical in high-stakes scenarios, and we believe that drawing on ideas from formal verification and incorporating them into RL is an important direction for improving the reliability of learning agents.
VeriXAI (2020–ongoing)

Outside-the-Box
A framework to monitor a neural network by observing its hidden layers.
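A minimal sketch of the box-abstraction idea (the class interface and the feature-extraction step are assumptions for illustration, not the framework's API): record per-class bounding boxes of hidden-layer activations at training time, then flag runtime inputs whose activation falls outside the box of the predicted class.

```python
import numpy as np

class BoxMonitor:
    """Minimal box-abstraction monitor over hidden-layer activations.

    For each known class, store per-dimension min/max bounds of the
    activations seen during training; at runtime, flag an input as
    suspicious if its activation falls outside the box of the class
    the network predicts.
    """

    def __init__(self):
        self.boxes = {}  # class label -> (lower bounds, upper bounds)

    def fit(self, activations, labels):
        # `activations` and `labels` are NumPy arrays of matching length.
        for c in np.unique(labels):
            acts = activations[labels == c]
            self.boxes[c] = (acts.min(axis=0), acts.max(axis=0))

    def accepts(self, activation, predicted_class):
        lo, hi = self.boxes[predicted_class]
        return bool(np.all(activation >= lo) and np.all(activation <= hi))

# Usage sketch: `hidden(x)` stands for the network's hidden-layer output.
# monitor = BoxMonitor()
# monitor.fit(hidden(train_inputs), train_labels)
# if not monitor.accepts(hidden(x), predicted):
#     ...  # potentially novel input; raise a warning
```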

Verification of Decision Trees
An approach to continuous-time safety verification of decision-tree control policies.

Into the Unknown
A framework to actively monitor a neural-network classifier and automatically adapt to unknown classes of inputs.
Decision Trees (2020–ongoing)

Broccoli
An algorithm to synthesize optimal decision-tree policies for a deterministic black-box environment, given a specification, a discretization of the tree predicates, and an initial set of states; optimality is defined with respect to the number of steps to achieve the goal.
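The step-optimality criterion can be made concrete with a small breadth-first-search sketch (the interfaces `step`, `is_goal`, and `actions` are assumptions; the actual synthesis of the tree over discretized predicates is not shown here):

```python
from collections import deque

def min_steps_to_goal(step, is_goal, actions, initial_state):
    """Fewest environment steps from `initial_state` to a goal state,
    computed by breadth-first search; this is the cost an optimal
    decision-tree policy must match for every initial state.
    States must be hashable; `step(state, action)` is the
    deterministic black-box transition function."""
    frontier = deque([(initial_state, 0)])
    visited = {initial_state}
    while frontier:
        state, dist = frontier.popleft()
        if is_goal(state):
            return dist
        for action in actions:
            successor = step(state, action)
            if successor not in visited:
                visited.add(successor)
                frontier.append((successor, dist + 1))
    return None  # the goal is unreachable from this initial state
```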

TRUST-AI
Advancing novel eXplainable AI (XAI) algorithms, predominantly based on genetic programming, a specific subtype of evolutionary algorithms, in collaboration with CWI and LUMC.

MurTree
An algorithm for learning optimal decision trees via dynamic programming and search.
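In spirit, the dynamic program decomposes the optimal (misclassification-minimizing) tree over subsets of the data and depth budgets. Below is a minimal sketch for binary features; MurTree itself adds caching strategies, bounding, and a specialized depth-two subroutine on top of this recursion.

```python
from functools import lru_cache

def optimal_tree_error(X, y, depth):
    """Minimum misclassification error of any decision tree of at most
    the given depth over binary features: a plain dynamic program in
    the spirit of MurTree, without its pruning and caching machinery."""
    @lru_cache(maxsize=None)
    def best(rows, budget):
        labels = [y[i] for i in rows]
        # Error of a leaf: everything outside the majority class.
        leaf_error = len(rows) - max(labels.count(c) for c in set(labels))
        if budget == 0 or leaf_error == 0:
            return leaf_error
        best_error = leaf_error
        for feature in range(len(X[0])):
            left = tuple(i for i in rows if X[i][feature] == 0)
            right = tuple(i for i in rows if X[i][feature] == 1)
            if not left or not right:
                continue  # this split does not separate anything
            best_error = min(best_error,
                             best(left, budget - 1) + best(right, budget - 1))
        return best_error

    return best(tuple(range(len(X))), depth)

# Example: one split on the first feature classifies this set perfectly.
# optimal_tree_error([[0, 1], [1, 0], [1, 1]], [0, 1, 1], depth=1) == 0
```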

Magic Books
An approach to combining reinforcement learning and decision trees for reliable controller design.
Past Projects

ERATO MMSD (Tokyo, 2018)
Meta-heuristic optimization for drone teams.

GRASP Lab UPenn (Philadelphia, 2018)
Meta-heuristic optimization for drone teams.

Stony Brook University (2016–2019)
V-formation control.

Vienna Ball of Sciences 2019
A 3D projection mapping installation.