Harnessing Discrete Representations for Continual Reinforcement Learning

By Edan Jacob Meyer, Adam White, and Marlos C. Machado

Reinforcement Learning Journal, vol. 2, 2024, pp. 606–628.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.



Abstract:

Reinforcement learning (RL) agents make decisions using nothing but observations from the environment, and consequently, rely heavily on the representations of those observations. Though some recent breakthroughs have used vector-based categorical representations of observations, often referred to as discrete representations, there is little work explicitly assessing the significance of such a choice. In this work, we provide a thorough empirical investigation of the advantages of discrete representations in the context of world-model learning, model-free RL, and ultimately continual RL problems, where we find discrete representations to have the greatest impact. We find that, when compared to traditional continuous representations, world models learned over discrete representations accurately model more of the world with less capacity, and that agents trained with discrete representations learn better policies with less data. In the context of continual RL, these benefits translate into faster-adapting agents. Additionally, our analysis suggests that it is the binary and sparse nature, rather than the “discreteness”, of discrete representations that leads to these improvements.
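To make the idea of a "vector-based categorical" (discrete) representation concrete, below is a minimal, illustrative sketch of an encoder that maps an observation to a vector of one-hot categorical latents, trained with the straight-through gradient estimator. This is not the authors' implementation; the class name and the sizes (obs_dim, n_latents, n_classes) are assumptions chosen only for illustration.

```python
# Minimal sketch of a discrete (vector-of-categoricals) representation.
# Assumed sizes and names are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteEncoder(nn.Module):
    def __init__(self, obs_dim=64, n_latents=8, n_classes=16):
        super().__init__()
        self.n_latents, self.n_classes = n_latents, n_classes
        self.net = nn.Linear(obs_dim, n_latents * n_classes)

    def forward(self, obs):
        logits = self.net(obs).view(-1, self.n_latents, self.n_classes)
        probs = F.softmax(logits, dim=-1)
        # Sample one class per latent and one-hot encode the result.
        idx = torch.multinomial(probs.view(-1, self.n_classes), 1)
        one_hot = F.one_hot(idx.view(-1, self.n_latents), self.n_classes).float()
        # Straight-through estimator: the forward pass uses the one-hot sample,
        # the backward pass uses the gradient of the soft probabilities.
        z = one_hot + probs - probs.detach()
        # The resulting representation is binary and sparse: exactly one
        # active unit per latent group.
        return z.view(z.size(0), -1)

obs = torch.randn(4, 64)
z = DiscreteEncoder()(obs)
print(z.shape)  # torch.Size([4, 128])
```

The contrast drawn in the abstract is with a traditional continuous representation, which would simply return a dense real-valued vector (e.g., the output of a linear layer) instead of the sparse one-hot blocks above.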


Citation Information:

Edan Jacob Meyer, Adam White, and Marlos C. Machado. "Harnessing Discrete Representations for Continual Reinforcement Learning." Reinforcement Learning Journal, vol. 2, 2024, pp. 606–628.

BibTeX:

@article{meyer2024harnessing,
    title={Harnessing Discrete Representations for Continual Reinforcement Learning},
    author={Meyer, Edan Jacob and White, Adam and Machado, Marlos C.},
    journal={Reinforcement Learning Journal},
    volume={2},
    pages={606--628},
    year={2024}
}