Reinforcement Learning Journal, vol. 1, 2024, pp. 400–449.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
Cognitive science and psychology suggest that object-centric representations of complex scenes are a promising step towards enabling efficient abstract reasoning from low-level perceptual features. Yet, most deep reinforcement learning approaches rely only on pixel-based representations that do not capture the compositional properties of natural scenes. To make progress, we need environments and datasets that allow us to develop and evaluate object-centric approaches. In our work, we extend the Atari Learning Environments, the most widely used evaluation framework for deep RL approaches, by introducing OCAtari, which performs resource-efficient extraction of the object-centric states of these games. Our framework supports object discovery, object representation learning, as well as object-centric RL. We evaluate OCAtari's detection capabilities and resource efficiency.
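To illustrate how the framework is typically used, the minimal sketch below creates an object-centric Pong environment and reads out the extracted object list inside a standard Gymnasium-style step loop. The import path ocatari.core.OCAtari, the constructor arguments mode and hud, and the env.objects attribute are taken from the project's public documentation, but treat the exact names and signatures as assumptions and consult the OCAtari repository for the current API.

    # Minimal usage sketch (assumed API; see the OCAtari repository for the exact interface).
    from ocatari.core import OCAtari  # assumed import path

    # mode="ram" extracts object properties from the emulator RAM (assumed flag);
    # hud=False drops HUD objects such as scores and lives (assumed flag).
    env = OCAtari("Pong", mode="ram", hud=False, render_mode="rgb_array")
    obs, info = env.reset()

    for _ in range(100):
        action = env.action_space.sample()  # random policy, Gymnasium-style
        obs, reward, terminated, truncated, info = env.step(action)
        # env.objects is assumed to hold the extracted object-centric state,
        # e.g. Player, Ball, and Enemy objects with their positions and sizes.
        for obj in env.objects:
            print(obj)
        if terminated or truncated:
            obs, info = env.reset()

    env.close()

Because the loop follows the Gymnasium API, the object-centric states can be consumed by existing RL training code with little change: the pixel observation remains available for standard agents, while env.objects provides the symbolic scene description for object-centric methods.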
Quentin Delfosse, Jannis Blüml, Bjarne Gregori, Sebastian Sztwiertnia, and Kristian Kersting. "OCAtari: Object-Centric Atari 2600 Reinforcement Learning Environments." Reinforcement Learning Journal, vol. 1, 2024, pp. 400–449.
BibTeX:
@article{delfosse2024ocatari,
    title={{OCAtari}: {O}bject-Centric {Atari} 2600 Reinforcement Learning Environments},
    author={Delfosse, Quentin and Bl{\"{u}}ml, Jannis and Gregori, Bjarne and Sztwiertnia, Sebastian and Kersting, Kristian},
    journal={Reinforcement Learning Journal},
    volume={1},
    pages={400--449},
    year={2024}
}