Reinforcement Learning from Delayed Observations via World Models

By Armin Karamzade, Kyungmin Kim, Montek Kalsi, and Roy Fox

Reinforcement Learning Journal, vol. 5, 2024, pp. 2123–2139.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.



Abstract:

In standard reinforcement learning settings, agents typically assume immediate feedback about the effects of their actions after taking them. However, in practice, this assumption may not hold due to physical constraints and can significantly impact the performance of learning algorithms. In this paper, we address observation delays in partially observable environments. We propose leveraging world models, which have shown success in integrating past observations and learning dynamics, to handle observation delays. By reducing delayed POMDPs to delayed MDPs with world models, our methods can effectively handle partial observability, where existing approaches achieve sub-optimal performance or degrade quickly as observability decreases. Experiments suggest that one of our methods can outperform a naive model-based approach by up to 250%. Moreover, we evaluate our methods on visual delayed environments, for the first time showcasing delay-aware reinforcement learning for continuous control with visual observations.
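To make the delayed-observation setting concrete, the following is a minimal sketch (not the paper's code) of a gymnasium wrapper that hands the agent the observation emitted a fixed number of steps in the past. The class name DelayedObservationWrapper, the fixed-delay assumption, and the CartPole example are illustrative only; the paper itself addresses the partially observable case by combining delays with world models.

    from collections import deque

    import gymnasium as gym


    class DelayedObservationWrapper(gym.Wrapper):
        """Illustrative wrapper: the agent acting at time t sees the
        observation from time t - delay (or the initial observation while
        the delay buffer is still filling)."""

        def __init__(self, env, delay=3):
            super().__init__(env)
            self.delay = delay
            self._buffer = deque(maxlen=delay + 1)

        def reset(self, **kwargs):
            obs, info = self.env.reset(**kwargs)
            # Pre-fill the buffer so the first `delay` steps return the
            # initial observation.
            self._buffer.clear()
            for _ in range(self.delay + 1):
                self._buffer.append(obs)
            return self._buffer.popleft(), info

        def step(self, action):
            obs, reward, terminated, truncated, info = self.env.step(action)
            self._buffer.append(obs)
            # Return the observation from `delay` steps ago.
            return self._buffer.popleft(), reward, terminated, truncated, info


    if __name__ == "__main__":
        env = DelayedObservationWrapper(gym.make("CartPole-v1"), delay=3)
        obs, _ = env.reset(seed=0)
        for _ in range(5):
            obs, reward, term, trunc, _ = env.step(env.action_space.sample())
            if term or trunc:
                break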


Citation Information:

Armin Karamzade, Kyungmin Kim, Montek Kalsi, and Roy Fox. "Reinforcement Learning from Delayed Observations via World Models." Reinforcement Learning Journal, vol. 5, 2024, pp. 2123–2139.

BibTeX:

@article{karamzade2024reinforcement,
    title={Reinforcement Learning from Delayed Observations via World Models},
    author={Karamzade, Armin and Kim, Kyungmin and Kalsi, Montek and Fox, Roy},
    journal={Reinforcement Learning Journal},
    volume={5},
    pages={2123--2139},
    year={2024}
}