Posterior Sampling for Continuing Environments

By Wanqiao Xu, Shi Dong, and Benjamin Van Roy

Reinforcement Learning Journal, vol. 1, no. 1, 2024, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.



Abstract:

Existing posterior sampling algorithms for continuing reinforcement learning (RL) rely on maintaining state-action visitation counts, making them unsuitable for complex environments with high-dimensional state spaces. We develop the first extension of posterior sampling for RL (PSRL) that is suited for a continuing agent-environment interface and integrates naturally into scalable agent designs. Our approach, continuing PSRL (CPSRL), determines when to resample a new model of the environment from the posterior distribution based on a simple randomization scheme. We establish an $\tilde{O}(\tau S \sqrt{A T})$ bound on the Bayesian regret in the tabular setting, where $S$ is the number of environment states, $A$ is the number of actions, and $\tau$ denotes the reward averaging time, which is a bound on the duration required to accurately estimate the average reward of any policy. Our work is the first to formalize and rigorously analyze this random resampling approach. Our simulations demonstrate CPSRL's effectiveness in high-dimensional state spaces where traditional algorithms fail.
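
To make the resampling idea concrete, the following is a minimal Python sketch of a posterior-sampling loop in which the agent redraws a model of the environment at random times. It is an illustrative reading of the abstract, not the paper's algorithm: the per-step coin flip with probability resample_prob, and the env, posterior, and model interfaces (reset, step, sample, update, solve_average_reward) are assumptions made for this example.

import numpy as np

def cpsrl_sketch(env, posterior, num_steps, resample_prob):
    """Posterior-sampling loop with randomized resampling (illustrative only).

    env           -- environment exposing reset() and step(action) -> (state, reward)
    posterior     -- belief over MDPs with sample() -> model and update(s, a, r, s2)
    num_steps     -- total number of interaction steps T
    resample_prob -- per-step probability of drawing a fresh model (an assumption,
                     not necessarily the paper's exact resampling rule)
    """
    rng = np.random.default_rng(0)
    model = posterior.sample()             # draw an MDP from the current posterior
    policy = model.solve_average_reward()  # plan with respect to the sampled model
    state = env.reset()

    for _ in range(num_steps):
        # Randomization scheme: occasionally discard the sampled model and redraw.
        if rng.random() < resample_prob:
            model = posterior.sample()
            policy = model.solve_average_reward()

        action = policy(state)
        next_state, reward = env.step(action)
        posterior.update(state, action, reward, next_state)  # refine the belief
        state = next_state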


Citation Information:

Wanqiao Xu, Shi Dong, and Benjamin Van Roy. "Posterior Sampling for Continuing Environments." Reinforcement Learning Journal, vol. 1, no. 1, 2024, pp. TBD.

BibTeX:

Note: Manually check this automatically generated text (particularly capitalization in the title and first-last splits of names).

@article{xu2024posterior,
    title={Posterior Sampling for Continuing Environments},
    author={Xu, Wanqiao and Dong, Shi and Van Roy, Benjamin},
    journal={Reinforcement Learning Journal},
    volume={1},
    number={1},
    year={2024}
}