BetaZero: Belief-State Planning for Long-Horizon POMDPs using Learned Approximations

By Robert J. Moss, Anthony Corso, Jef Caers, and Mykel Kochenderfer

Reinforcement Learning Journal, vol. 1, 2024, pp. 158–181.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Real-world planning problems, including autonomous driving and sustainable energy applications like carbon storage and resource exploration, have recently been modeled as partially observable Markov decision processes (POMDPs) and solved using approximate methods. To solve high-dimensional POMDPs in practice, state-of-the-art methods use online planning with problem-specific heuristics to reduce planning horizons and make the problems tractable. Algorithms that learn approximations to replace heuristics have recently found success in large-scale fully observable domains. The key insight is the combination of online Monte Carlo tree search with offline neural network approximations of the optimal policy and value function. In this work, we bring this insight to partially observable domains and propose BetaZero, a belief-state planning algorithm for high-dimensional POMDPs. BetaZero learns offline approximations that replace heuristics to enable online decision making in long-horizon problems. We address several challenges inherent in large-scale partially observable domains; namely, the challenges of transitioning in stochastic environments, prioritizing action branching with a limited search budget, and representing beliefs as input to the network. To formalize the use of all limited search information, we train against a novel $Q$-weighted visit counts policy. We test BetaZero on various well-established POMDP benchmarks found in the literature and a real-world problem of critical mineral exploration. Experiments show that BetaZero outperforms state-of-the-art POMDP solvers on a variety of tasks.
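To give a flavor of the $Q$-weighted visit counts idea mentioned in the abstract, the sketch below combines the two statistics a tree search produces for each root action — visit counts $N(b,a)$ and value estimates $Q(b,a)$ — into a single policy target. This is an illustrative reconstruction of the general idea, not the paper's exact formulation: the multiplicative weighting of normalized counts by a softmax over $Q$-values, and the temperature parameter, are assumptions for the example.

```python
import math

def q_weighted_policy(visit_counts, q_values, temperature=1.0):
    """Combine root-node visit counts and Q-value estimates from a
    Monte Carlo tree search into a single policy target.

    Illustrative sketch only: normalized visit counts are weighted
    by a softmax over Q-values, then renormalized, so an action must
    be both frequently visited and high-value to receive high mass.
    """
    total_visits = sum(visit_counts)
    pi_counts = [n / total_visits for n in visit_counts]

    # Softmax over Q-values, shifted by the max for numerical stability.
    q_max = max(q_values)
    exp_q = [math.exp((q - q_max) / temperature) for q in q_values]
    z = sum(exp_q)
    pi_q = [e / z for e in exp_q]

    # Element-wise product of the two distributions, renormalized.
    weighted = [c * q for c, q in zip(pi_counts, pi_q)]
    w = sum(weighted)
    return [x / w for x in weighted]
```

With a limited search budget, visit counts alone can be noisy; weighting by the $Q$-values lets the target use all of the information the search gathered, which is the motivation the abstract gives for the $Q$-weighted policy.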


Citation Information:

Robert J. Moss, Anthony Corso, Jef Caers, and Mykel Kochenderfer. "BetaZero: Belief-State Planning for Long-Horizon POMDPs using Learned Approximations." Reinforcement Learning Journal, vol. 1, 2024, pp. 158–181.

BibTeX:

@article{moss2024betazero,
    title={{BetaZero}: {B}elief-State Planning for Long-Horizon {POMDP}s using Learned Approximations},
    author={Moss, Robert J. and Corso, Anthony and Caers, Jef and Kochenderfer, Mykel},
    journal={Reinforcement Learning Journal},
    volume={1},
    pages={158--181},
    year={2024}
}