Reinforcement Learning Journal, vol. 4, 2024, pp. 1793–1821.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
We study the problem of Distributionally Robust Constrained RL (DRC-RL), where the goal is to maximize the expected reward subject to environmental distribution shifts and constraints. This setting captures situations where training and testing environments differ and policies must satisfy constraints motivated by safety or limited budgets. Despite significant progress on algorithm design for the separate problems of distributionally robust RL and constrained RL, there do not yet exist algorithms with end-to-end convergence guarantees for DRC-RL. We develop an algorithmic framework based on strong duality that enables the first efficient and provable solution for a class of environmental uncertainties. Further, our framework exposes an inherent structure of DRC-RL, arising from the combination of distributional robustness and constraints, that prevents a popular class of iterative methods from tractably solving DRC-RL, even though such methods are applicable to distributionally robust RL and constrained RL individually. Finally, we conduct experiments on a car racing benchmark to evaluate the effectiveness of the proposed algorithm.
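For readers unfamiliar with the setting, a rough sketch of a DRC-RL objective is shown below. The notation and the choice of a worst-case constraint are our own illustration of the general problem class described in the abstract, not a formulation taken verbatim from the paper: the policy is optimized against the worst-case transition kernel in an uncertainty set, while a discounted constraint value must remain above a threshold.

% Illustrative DRC-RL formulation (our notation; details may differ from the paper).
% \Pi: policy class; \mathcal{P}: uncertainty set of transition kernels;
% r: reward; c: constraint signal; \xi: constraint threshold; \gamma: discount factor.
\max_{\pi \in \Pi} \; \min_{P \in \mathcal{P}} \;
  \mathbb{E}_{\pi, P}\!\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
  \min_{P \in \mathcal{P}} \;
  \mathbb{E}_{\pi, P}\!\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\Big] \;\ge\; \xi .

The strong-duality framework mentioned in the abstract refers to handling the constraint via its dual; how the dual variables and the worst-case environment interact is the subject of the paper itself.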
Zhengfei Zhang, Kishan Panaganti, Laixi Shi, Yanan Sui, Adam Wierman, and Yisong Yue. "Distributionally Robust Constrained Reinforcement Learning under Strong Duality." Reinforcement Learning Journal, vol. 4, 2024, pp. 1793–1821.
BibTeX:
@article{zhang2024distributionally,
title={Distributionally Robust Constrained Reinforcement Learning under Strong Duality},
author={Zhang, Zhengfei and Panaganti, Kishan and Shi, Laixi and Sui, Yanan and Wierman, Adam and Yue, Yisong},
journal={Reinforcement Learning Journal},
volume={4},
pages={1793--1821},
year={2024}
}