How Should We Meta-Learn Reinforcement Learning Algorithms?

By Alexander David Goldie, Zilin Wang, Jakob Nicolaus Foerster, and Shimon Whiteson

Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.


Abstract:

The process of meta-learning algorithms from data, instead of relying on manual design, is growing in popularity as a paradigm for improving the performance of machine learning systems. Meta-learning shows particular promise for reinforcement learning (RL), where algorithms are often adapted from supervised or unsupervised learning despite their suboptimality for RL. However, until now there has been a severe lack of comparison between different meta-learning algorithms, such as using evolution to optimise over black-box functions or using LLMs to propose code. In this paper, we carry out this empirical comparison of the different approaches when applied to a range of meta-learned algorithms, each of which targets a different part of the RL pipeline. In addition to meta-train and meta-test performance, we also investigate factors including the interpretability, sample cost, and train time of each meta-learning algorithm. Based on these findings, we propose several guidelines for meta-learning new RL algorithms, which will help ensure that future learned algorithms are as performant as possible.
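
To make the black-box evolution approach mentioned above concrete, the snippet below is a minimal, illustrative sketch (not the paper's implementation) of meta-learning the parameters of an RL algorithm component with a simple evolution strategy. The function `inner_rl_training_run` is a hypothetical placeholder standing in for "train an agent with the candidate algorithm and return its evaluation score".

```python
# Minimal sketch: evolution strategies (ES) over the meta-parameters of a
# learned RL algorithm component. All names here are illustrative.
import numpy as np

def inner_rl_training_run(meta_params: np.ndarray) -> float:
    # Placeholder fitness: stands in for training an RL agent with the
    # algorithm defined by meta_params and evaluating its return.
    target = np.linspace(-1.0, 1.0, meta_params.size)
    return -float(np.sum((meta_params - target) ** 2))

def es_meta_train(dim: int = 8, pop_size: int = 32, sigma: float = 0.1,
                  lr: float = 0.05, generations: int = 200) -> np.ndarray:
    rng = np.random.default_rng(0)
    theta = np.zeros(dim)  # meta-parameters of the learned algorithm
    for _ in range(generations):
        noise = rng.standard_normal((pop_size, dim))
        # Evaluate each perturbed candidate on the black-box RL objective.
        fitness = np.array([inner_rl_training_run(theta + sigma * eps)
                            for eps in noise])
        # Standardise fitness and take an ES gradient step.
        fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
        theta += lr / (pop_size * sigma) * noise.T @ fitness
    return theta

if __name__ == "__main__":
    learned = es_meta_train()
    print("meta-learned parameters:", np.round(learned, 3))
```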


Citation Information:

Alexander David Goldie, Zilin Wang, Jakob Nicolaus Foerster, and Shimon Whiteson. "How Should We Meta-Learn Reinforcement Learning Algorithms?" Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

BibTeX:
@article{goldie2025should,
    title={How Should We Meta-Learn Reinforcement Learning Algorithms?},
    author={Goldie, Alexander David and Wang, Zilin and Foerster, Jakob Nicolaus and Whiteson, Shimon},
    journal={Reinforcement Learning Journal},
    year={2025}
}