Reinforcement Learning Journal, vol. 1, 2024, pp. 450–469.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
A core ambition of reinforcement learning (RL) is the creation of agents capable of rapid learning in novel tasks. Meta-RL aims to achieve this by directly learning such agents. Black box methods do so by training off-the-shelf sequence models end-to-end. By contrast, task inference methods explicitly infer a posterior distribution over the unknown task, typically using distinct objectives and sequence models designed to enable task inference. Recent work has shown that task inference methods are not necessary for strong performance. However, it remains unclear whether task inference sequence models are beneficial even when task inference objectives are not used. In this paper, we present evidence that task inference sequence models are indeed still beneficial. In particular, we investigate sequence models with permutation invariant aggregation, which exploit the fact that, due to the Markov property, the task posterior does not depend on the order of data. We empirically confirm the advantage of permutation invariant sequence models without the use of task inference objectives. However, we also find, surprisingly, that there are multiple conditions under which permutation variance remains useful. Therefore, we propose SplAgger, which uses both permutation variant and invariant components to achieve the best of both worlds, outperforming all evaluated baselines on continuous control and memory environments. Code is provided at https://github.com/jacooba/hyper.
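To make the split-aggregation idea concrete, below is a minimal PyTorch sketch, not the paper's implementation: the module names, layer sizes, and the specific choices of a GRU for the permutation-variant path and max-pooling for the permutation-invariant path are illustrative assumptions. It shows the core idea the abstract describes: summarizing the same encoded transitions with both an order-sensitive component and an order-insensitive component, then combining the two summaries into one task representation.

```python
import torch
import torch.nn as nn

class SplitAggregator(nn.Module):
    """Illustrative split aggregation: combine a permutation-variant
    (recurrent) summary with a permutation-invariant (pooled) summary
    of a sequence of encoded transitions. Hypothetical sketch only."""

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # Shared encoder for per-transition features, e.g. (s, a, r, s').
        self.encoder = nn.Linear(input_dim, hidden_dim)
        # Permutation-variant path: output depends on transition order.
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Fuse both summaries into a single task representation.
        self.head = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, transitions: torch.Tensor) -> torch.Tensor:
        # transitions: (batch, time, input_dim)
        h = self.encoder(transitions)
        rnn_out, _ = self.rnn(h)
        variant = rnn_out[:, -1]           # final hidden state (order-sensitive)
        invariant = h.max(dim=1).values    # max-pool over time (order-insensitive)
        return self.head(torch.cat([variant, invariant], dim=-1))

# Usage: embed a batch of 2 contexts, each with 10 transitions of 8 features.
agg = SplitAggregator(input_dim=8, hidden_dim=64)
task_embedding = agg(torch.randn(2, 10, 8))  # shape (2, 64)
```

The pooled path here reflects the abstract's observation that, by the Markov property, the task posterior is order-independent, while the recurrent path preserves order information for the conditions where permutation variance remains useful.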
Jacob Beck, Matthew Thomas Jackson, Risto Vuorio, Zheng Xiong, and Shimon Whiteson. "SplAgger: Split Aggregation for Meta-Reinforcement Learning." Reinforcement Learning Journal, vol. 1, 2024, pp. 450–469.
BibTeX:
@article{beck2024splagger,
    title={{SplAgger}: {S}plit Aggregation for Meta-Reinforcement Learning},
    author={Beck, Jacob and Jackson, Matthew Thomas and Vuorio, Risto and Xiong, Zheng and Whiteson, Shimon},
    journal={Reinforcement Learning Journal},
    volume={1},
    pages={450--469},
    year={2024}
}