Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.
Recent work proposes using world models as controlled virtual environments in which AI agents can be tested before deployment to ensure their reliability and safety. However, accurate world models often have high computational demands that can severely restrict the scope and depth of such assessments. Inspired by the classic ‘brain in a vat’ thought experiment, here we investigate ways of simplifying world models that remain agnostic to the AI agent under evaluation. By following principles from computational mechanics, our approach reveals a fundamental trade-off in world model construction between efficiency and interpretability, demonstrating that no single world model can optimise all desirable characteristics. Building on this trade-off, we identify procedures to build world models that either minimise memory requirements, delineate the boundaries of what is learnable, or allow tracking the causes of undesirable outcomes. In doing so, this work establishes fundamental limits in world modelling, leading to actionable guidelines that inform core design choices related to effective agent evaluation.
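The memory-minimisation result builds on computational mechanics, where the minimal predictive model of a process (its ε-machine) is obtained by merging all histories that induce the same conditional distribution over futures into a single causal state. The sketch below illustrates this merging on a toy binary process; the process, history length, and merging tolerance are illustrative assumptions for exposition, not constructions taken from the paper.

```python
# Toy illustration (not the paper's construction) of causal-state merging from
# computational mechanics: histories that predict the same future are grouped
# into one state, yielding a memory-minimal predictive model of the process.
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(0)


def golden_mean_sequence(n):
    """Sample the Golden Mean process: a 1 is never followed by another 1."""
    seq, prev = [], 0
    for _ in range(n):
        bit = 0 if prev == 1 else int(rng.random() < 0.5)
        seq.append(bit)
        prev = bit
    return seq


def causal_states(seq, hist_len=3, tol=0.05):
    """Group length-`hist_len` histories by their estimated P(next | history)."""
    counts = defaultdict(lambda: np.zeros(2))
    for i in range(len(seq) - hist_len):
        hist = tuple(seq[i : i + hist_len])
        counts[hist][seq[i + hist_len]] += 1
    dists = {h: c / c.sum() for h, c in counts.items() if c.sum() > 0}

    # Merge histories whose future distributions agree within the tolerance.
    states = []  # each entry: (representative distribution, member histories)
    for hist, dist in sorted(dists.items()):
        for rep, members in states:
            if np.abs(rep - dist).max() < tol:
                members.append(hist)
                break
        else:
            states.append((dist, [hist]))
    return states


if __name__ == "__main__":
    seq = golden_mean_sequence(100_000)
    for dist, members in causal_states(seq):
        print(f"P(next=1) ~ {dist[1]:.2f}  <- histories {members}")
```

For this process the five admissible length-3 histories collapse into just two causal states (those ending in 1, which force a 0 next, and those ending in 0, which predict a fair coin), showing how a predictive model can need far less memory than the raw history it summarises.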
Fernando Rosas, Alexander Boyd, and Manuel Baltieri. "AI in a vat: Fundamental limits of efficient world modelling for agent sandboxing and interpretability." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
BibTeX:

@article{rosas2025fundamental,
    title={{AI} in a vat: {F}undamental limits of efficient world modelling for agent sandboxing and interpretability},
    author={Rosas, Fernando and Boyd, Alexander and Baltieri, Manuel},
    journal={Reinforcement Learning Journal},
    year={2025}
}