About
I am a final-year PhD student in the NLP CDT at the University of Edinburgh, supervised by Professor Kartic Subr and Professor Mirella Lapata. I work with post-doctoral researcher Kevin Denamganai, with funding from ELIAI (the Edinburgh Laboratory for Integrated Artificial Intelligence).
Research
I work on improving language models’ understanding of complex physical systems, with a particular focus on explainability and causal reasoning in 3D simulation environments. My recent work studies how human-interpretable patterns can be learned from simulation traces and used as structured representations of physical behaviour, enabling language models to reason more effectively about dynamics, outcomes, and interventions. My broader research interests include language models, physics simulation, explainable AI, causal modelling, physical reasoning, and neuro-symbolic methods.
Latest Work
Discovering High Level Patterns from Simulation Traces
This work investigates how high-level, human-interpretable patterns can be automatically discovered from simulation traces using LLM-based evolutionary programming. Rather than relying only on low-level state trajectories, the method learns structured descriptions of meaningful physical behaviours, providing a semantic interface between simulation data and language models. This improves language models' ability to understand complex environments and supports more explainable, causally informative reasoning about physical systems.
CueTip: An Interactive and Explainable Physics-aware Pool Assistant
Project page
Venue: SIGGRAPH 2025
CueTip integrates a language model directly into a pool simulation. The language model acts as a facilitator between the user and the system, allowing the user to ask questions and receive explanations about the game and suggested shots. We train a neural surrogate of traditional pool agents to optimize shot parameters so that the events produced by a shot match those requested by the language model. The simulator reports events in natural language, allowing the LM to generate explanations of the physics system that are grounded in real simulation events.
xInv: Explainable Optimization of Inverse Problems
xInv generates grounded explanations of inverse problems. We focus on two example problems: optimizing a differentiable physics simulation and training a small language model. Our method utilises natural language descriptions of Events (the output of the system during rollouts), Rewards (the optimization objective), and Updates (the changes to the system's parameters during an optimization step). These are then used to generate explanations of the optimization process.
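As a rough illustration, the Events/Rewards/Updates decomposition described above could be represented as follows. This is a minimal sketch: the class, field, and function names are hypothetical and are not the paper's actual interface.

```python
from dataclasses import dataclass

@dataclass
class OptimizationStep:
    """One step of an optimization trace, in the three views described above.
    Hypothetical structure for illustration only."""
    events: str   # natural-language description of the rollout's output
    reward: float # value of the optimization objective at this step
    update: str   # natural-language description of the parameter change

def describe(step: OptimizationStep) -> str:
    """Combine the three views into a text fragment an LM could explain from."""
    return (
        f"Events: {step.events}\n"
        f"Reward: {step.reward:.3f}\n"
        f"Update: {step.update}"
    )

step = OptimizationStep(
    events="the ball bounced twice before settling short of the target",
    reward=0.42,
    update="increased initial velocity by 5%",
)
print(describe(step))
```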
Highlighted Publications
Discovering High Level Patterns from Simulation Traces
Venue: arXiv
URL: https://arxiv.org/abs/2602.10009
Authors: Sean Memery, Kartic Subr
xInv: Explainable Optimization of Inverse Problems
Venue: arXiv
URL: https://arxiv.org/abs/2505.64677
Authors: Sean Memery, Kevin Denamganai, Anna Kapron-King, Kartic Subr
CueTip: An Interactive and Explainable Physics-aware Pool Assistant
Venue: SIGGRAPH 2025
URL: https://dl.acm.org/doi/10.1145/3721238.3730742
Authors: Sean Memery, Kevin Denamganai, Jiaxin Zhang, Zehai Tu, Yiwen Guo, Kartic Subr
SimLM: Can Language Models Infer Parameters of Physical Systems?
Venue: arXiv
URL: https://arxiv.org/abs/2312.14215
Authors: Sean Memery, Mirella Lapata, Kartic Subr
Talks
CueTip: An Interactive and Explainable Physics-aware Pool Assistant
August 13, 2025, SIGGRAPH 2025
Language Model Reasoning aided by Physical Simulators
February 23, 2024, Trinity College Dublin, Ireland
