An Optimisation Framework for Unsupervised Environment Design

By Nathan Monette, Alistair Letcher, Michael Beukman, Matthew Thomas Jackson, Alexander Rutherford, Alexander David Goldie, and Jakob Nicolaus Foerster

Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.


Abstract:

For reinforcement learning agents to be deployed in high-risk settings, they must achieve a high level of robustness to unfamiliar scenarios. One method for improving robustness is unsupervised environment design (UED), a suite of methods aiming to maximise an agent's generalisability across configurations of an environment. In this work, we study UED from an optimisation perspective, providing stronger theoretical guarantees for practical settings than prior work. Whereas previous methods relied on guarantees that hold only *if* they reach convergence, our framework employs a nonconvex-strongly-concave objective for which we provide a *provably convergent* algorithm in the zero-sum setting. We empirically verify the efficacy of our method, outperforming prior methods across a range of environments of varying difficulty.
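The nonconvex-strongly-concave structure mentioned in the abstract can be illustrated with a minimal two-timescale gradient descent-ascent sketch. This is *not* the paper's algorithm or objective; the toy function `f`, its gradients, the step sizes, and the iteration budget below are all illustrative assumptions chosen only to show why a faster inner (maximising) timescale lets the outer (minimising) player track the inner player's best response.

```python
import math

# Toy objective: f(x, y) = sin(x) + x*y - y^2.
# Nonconvex in x (via sin), strongly concave in y (via -y^2).
# Best response of the maximiser: y*(x) = x / 2.
def grad_x(x, y):
    return math.cos(x) + y      # d f / d x

def grad_y(x, y):
    return x - 2.0 * y          # d f / d y

# Two-timescale gradient descent-ascent: the concave player
# ascends with a larger step size than the nonconvex player
# descends, so y stays close to the best response y*(x).
lr_x, lr_y = 0.01, 0.1          # illustrative step sizes
x, y = 0.0, 0.0
for _ in range(5000):
    y += lr_y * grad_y(x, y)    # fast ascent on the strongly concave side
    x -= lr_x * grad_x(x, y)    # slow descent on the nonconvex side

# At an approximate stationary point: y ~= x/2 and cos(x) + y ~= 0.
print(f"x = {x:.4f}, y = {y:.4f}")
```

Strong concavity in the maximising variable is what makes the inner best response unique and smooth, which is the standard ingredient behind convergence guarantees for this class of min-max problems; the paper develops this perspective for the UED setting specifically.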


Citation Information:

Nathan Monette, Alistair Letcher, Michael Beukman, Matthew Thomas Jackson, Alexander Rutherford, Alexander David Goldie, and Jakob Nicolaus Foerster. "An Optimisation Framework for Unsupervised Environment Design." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

BibTeX:
@article{monette2025optimisation,
    title={An Optimisation Framework for Unsupervised Environment Design},
    author={Monette, Nathan and Letcher, Alistair and Beukman, Michael and Jackson, Matthew Thomas and Rutherford, Alexander and Goldie, Alexander David and Foerster, Jakob Nicolaus},
    journal={Reinforcement Learning Journal},
    year={2025}
}