Inverse Reinforcement Learning with Multiple Planning Horizons

By Jiayu Yao, Weiwei Pan, Finale Doshi-Velez, and Barbara E Engelhardt

Reinforcement Learning Journal, vol. 3, 2024, pp. 1138–1167.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.



Abstract:

In this work, we study an inverse reinforcement learning (IRL) problem where the experts are planning *under a shared reward function but with different, unknown planning horizons*. Without knowledge of the discount factors, the reward function has a larger feasible solution set, which makes it harder for existing IRL approaches to identify a reward function. To overcome this challenge, we develop algorithms that learn a global multi-agent reward function together with agent-specific discount factors that reconstruct the expert policies. We characterize the feasible solution space of the reward function and discount factors for both algorithms and demonstrate the generalizability of the learned reward function across multiple domains.
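To see why unknown planning horizons complicate reward identification, consider a toy chain MDP (our own illustration, not an example from the paper): two experts share the same reward function but plan with different discount factors, and their optimal policies disagree. An IRL method that observed only one of these policies and assumed a fixed discount factor could attribute the behavioral difference to the reward itself.

```python
import numpy as np

# Hypothetical chain MDP (illustrative only): states 0..5, actions
# 0 = left, 1 = right. Entering state 0 yields reward 2; entering
# state 5 yields reward 10; both endpoints are absorbing.
N = 6
REWARD = np.array([2.0, 0, 0, 0, 0, 10.0])
TERMINAL = {0, N - 1}

def step(s, a):
    # Deterministic transition; absorbing at the endpoints.
    return s if s in TERMINAL else s + (1 if a else -1)

def greedy_policy(gamma, iters=200):
    # Plain value iteration, then the greedy policy w.r.t. V.
    V = np.zeros(N)
    for _ in range(iters):
        for s in range(N):
            if s not in TERMINAL:
                V[s] = max(REWARD[step(s, a)] + gamma * V[step(s, a)]
                           for a in (0, 1))
    return [int(np.argmax([REWARD[step(s, a)] + gamma * V[step(s, a)]
                           for a in (0, 1)])) if s not in TERMINAL else 0
            for s in range(N)]

myopic = greedy_policy(0.30)   # short effective horizon
patient = greedy_policy(0.95)  # long effective horizon

# Same reward, different discount factors: at state 1 the myopic
# expert heads left toward the small nearby reward, while the
# patient expert heads right toward the large distant reward.
print(myopic[1], patient[1])  # → 0 1
```

The two policies disagree at state 1 even though the reward function is identical, which is exactly the ambiguity the paper's algorithms address by jointly inferring a shared reward and per-agent discount factors.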


Citation Information:

Jiayu Yao, Weiwei Pan, Finale Doshi-Velez, and Barbara E Engelhardt. "Inverse Reinforcement Learning with Multiple Planning Horizons." Reinforcement Learning Journal, vol. 3, 2024, pp. 1138–1167.

BibTeX:

@article{yao2024inverse,
    title={Inverse Reinforcement Learning with Multiple Planning Horizons},
    author={Yao, Jiayu and Pan, Weiwei and Doshi-Velez, Finale and Engelhardt, Barbara E},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1138--1167},
    year={2024}
}