The Limits of Pure Exploration in POMDPs: When the Observation Entropy is Enough

By Riccardo Zamboni, Duilio Cirino, Marcello Restelli, and Mirco Mutti

Reinforcement Learning Journal, vol. 2, 2024, pp. 676–692.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

The problem of pure exploration in Markov decision processes has been cast as maximizing the entropy over the state distribution induced by the agent's policy, an objective that has been extensively studied. However, little attention has been dedicated to state entropy maximization under partial observability, despite the latter being ubiquitous in applications, e.g., finance and robotics, in which the agent only receives noisy observations of the true state governing the system's dynamics. How can we address state entropy maximization in those domains? In this paper, we study the simple approach of maximizing the entropy over observations in place of true latent states. First, we provide lower and upper bounds on the approximation of the true state entropy that depend only on some properties of the observation function. Then, we show how knowledge of the latter can be exploited to compute a principled regularization of the observation entropy to improve performance. With this work, we provide both a flexible approach to bring advances in state entropy maximization to the POMDP setting and a theoretical characterization of its intrinsic limits.
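
Illustration (Python):

To make the central quantities concrete: in a tabular POMDP with observation kernel O(o|s), a state distribution d_S induces an observation distribution d_O(o) = Σ_s O(o|s) d_S(s), and the approach above optimizes H(d_O) as a proxy for H(d_S). The following is a minimal, illustrative sketch of that comparison; the matrix and distribution values are hypothetical, and this is not the authors' implementation.

    import numpy as np

    def entropy(p, eps=1e-12):
        """Shannon entropy H(p) = -sum_x p(x) log p(x), in nats."""
        p = np.asarray(p, dtype=float)
        return -np.sum(p * np.log(p + eps))

    # Hypothetical observation matrix for 3 states and 2 observations:
    # O[s, o] = Pr(o | s); each row sums to 1.
    O = np.array([
        [0.9, 0.1],
        [0.2, 0.8],
        [0.5, 0.5],
    ])

    # State distribution induced by some policy (illustrative values).
    d_S = np.array([0.5, 0.3, 0.2])

    # Induced observation distribution: d_O(o) = sum_s Pr(o | s) d_S(s).
    d_O = d_S @ O

    print(f"state entropy       H(d_S) = {entropy(d_S):.4f} nats")
    print(f"observation entropy H(d_O) = {entropy(d_O):.4f} nats")

The noisier the observation kernel, the looser the correspondence between the two entropies, which is the kind of gap the paper's bounds characterize in terms of properties of the observation function.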


Citation Information:

Riccardo Zamboni, Duilio Cirino, Marcello Restelli, and Mirco Mutti. "The Limits of Pure Exploration in POMDPs: When the Observation Entropy is Enough." Reinforcement Learning Journal, vol. 2, 2024, pp. 676–692.

BibTeX:

@article{zamboni2024limits,
    title={The Limits of Pure Exploration in {POMDP}s: {W}hen the Observation Entropy is Enough},
    author={Zamboni, Riccardo and Cirino, Duilio and Restelli, Marcello and Mutti, Mirco},
    journal={Reinforcement Learning Journal},
    volume={2},
    pages={676--692},
    year={2024}
}