Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.
The purpose of continual reinforcement learning is to train an agent on a sequence of tasks so that it learns those appearing later in the sequence while retaining the ability to perform those that appeared earlier. Experience replay is a popular method for helping the agent remember previous tasks, but its effectiveness depends strongly on which experiences are selected for storage. Kompella et al. (2023) proposed organizing the experience replay buffer into partitions, each storing transitions that lead to a rare but crucial event, so that these key experiences are revisited more often during training. However, the method is sensitive to the manual selection of event states. To address this issue, we introduce ProtoCRL, a prototype-based architecture that leverages a variational Gaussian mixture model to automatically discover effective event states and build the associated partitions in the experience replay buffer. The proposed approach is tested on a sequence of MiniGrid environments, demonstrating the agent's ability to adapt and learn new skills incrementally.
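To make the partitioning idea concrete, below is a minimal sketch of a replay buffer whose partitions are derived from a variational Gaussian mixture over visited states. The class name `PartitionedReplayBuffer`, the routing and sampling logic, and the use of scikit-learn's `BayesianGaussianMixture` as the variational mixture are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import deque
from sklearn.mixture import BayesianGaussianMixture

class PartitionedReplayBuffer:
    """Replay buffer split into per-prototype partitions.

    A variational (Dirichlet-process) Gaussian mixture clusters visited
    states; each transition is stored in the partition of its predicted
    cluster, and sampling draws uniformly across partitions so transitions
    tied to rarely visited prototypes are revisited more often.
    """

    def __init__(self, max_components=8, partition_size=10_000):
        self.gmm = BayesianGaussianMixture(
            n_components=max_components,
            weight_concentration_prior_type="dirichlet_process",
        )
        self.partition_size = partition_size
        self.partitions = {}   # cluster id -> deque of transitions
        self.fitted = False

    def fit_prototypes(self, states):
        # Refit the mixture on a batch of recent states; the effective
        # number of components is pruned by the Dirichlet-process prior.
        self.gmm.fit(np.asarray(states))
        self.fitted = True

    def add(self, state, action, reward, next_state, done):
        # Route the transition to the partition of its state's cluster.
        k = int(self.gmm.predict(np.asarray(state)[None])[0]) if self.fitted else 0
        part = self.partitions.setdefault(k, deque(maxlen=self.partition_size))
        part.append((state, action, reward, next_state, done))

    def sample(self, batch_size, rng=None):
        # Uniform over partitions, then uniform within each partition,
        # which over-samples rare-event transitions relative to a flat buffer.
        rng = rng or np.random.default_rng()
        keys = list(self.partitions)
        batch = []
        for _ in range(batch_size):
            part = self.partitions[keys[rng.integers(len(keys))]]
            batch.append(part[rng.integers(len(part))])
        return batch

# Usage: fit prototypes on recently visited states, then store and sample.
buf = PartitionedReplayBuffer()
states = np.random.randn(500, 4)          # placeholder observations
buf.fit_prototypes(states)
buf.add(states[0], 1, 0.0, states[1], False)
batch = buf.sample(1)
```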
Michela Proietti, Peter R. Wurman, Peter Stone, and Roberto Capobianco. "ProtoCRL: Prototype-based Network for Continual Reinforcement Learning." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
BibTeX:
@article{proietti2025protocrl,
    title={{ProtoCRL}: {P}rototype-based Network for Continual Reinforcement Learning},
    author={Proietti, Michela and Wurman, Peter R. and Stone, Peter and Capobianco, Roberto},
    journal={Reinforcement Learning Journal},
    year={2025}
}