Reinforcement Learning Journal, vol. 6, 2025, pp. 2342–2367.
Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.
Reinforcement learning agents are fundamentally limited by the quality of the reward functions they learn from, yet reward design is often overlooked under the assumption that a well-defined reward is readily available. However, in practice, designing rewards is difficult, and even when a reward is specified, evaluating its correctness is equally problematic: how do we know whether a reward function is correctly specified? In our work, we address these challenges by focusing on reward alignment: assessing whether a reward function accurately encodes the preferences of a human stakeholder. As a concrete measure of reward alignment, we introduce the Trajectory Alignment Coefficient, which quantifies the similarity between a human stakeholder's ranking of trajectory distributions and the ranking induced by a given reward function. We show that the Trajectory Alignment Coefficient exhibits desirable properties: it does not require access to a ground-truth reward, it is invariant to potential-based reward shaping, and it is applicable to online RL. Additionally, in an $11$-person user study of RL practitioners, we found that access to the Trajectory Alignment Coefficient during reward selection led to statistically significant improvements. Compared to relying only on reward functions, our metric reduced cognitive workload by $1.5\times$, was preferred by 82\% of users, and increased the success rate of selecting reward functions that produced performant policies by 41\%.
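To make the metric concrete, below is a minimal sketch of one way such a coefficient could be computed: a Kendall's-tau-style rank agreement between a stakeholder's ordering of a set of trajectories and the ordering induced by a candidate reward function's returns. The function names, the use of undiscounted returns, and the handling of ties are illustrative assumptions for this sketch, not the paper's exact definition, which is stated over trajectory distributions.

import itertools
from typing import Callable, Sequence

# A trajectory is modeled here as a sequence of (state, action) pairs.
Trajectory = Sequence[tuple]


def trajectory_return(trajectory: Trajectory,
                      reward_fn: Callable[[object, object], float]) -> float:
    """Undiscounted return of a trajectory under a candidate reward function."""
    return sum(reward_fn(s, a) for s, a in trajectory)


def alignment_coefficient(trajectories: list,
                          human_ranks: list,
                          reward_fn: Callable[[object, object], float]) -> float:
    """Rank agreement in [-1, 1]: +1 means the reward orders the trajectories
    exactly as the stakeholder does, -1 means the order is fully reversed."""
    returns = [trajectory_return(t, reward_fn) for t in trajectories]
    concordant = discordant = 0
    for i, j in itertools.combinations(range(len(trajectories)), 2):
        human_diff = human_ranks[i] - human_ranks[j]
        reward_diff = returns[i] - returns[j]
        if human_diff * reward_diff > 0:
            concordant += 1
        elif human_diff * reward_diff < 0:
            discordant += 1
        # Pairs tied in either ranking are ignored in this simplified version.
    total = concordant + discordant
    return 0.0 if total == 0 else (concordant - discordant) / total


# Example usage with toy trajectories from a hypothetical grid world:
trajs = [[(0, "right"), (1, "right")],
         [(0, "left"), (0, "left")],
         [(0, "right"), (1, "up")]]
human_ranks = [3, 1, 2]  # higher rank = more preferred by the stakeholder


def candidate_reward(state, action):
    """Toy candidate reward that pays only for moving right."""
    return 1.0 if action == "right" else 0.0


print(alignment_coefficient(trajs, human_ranks, candidate_reward))  # prints 1.0

A practitioner comparing several candidate reward functions could compute such a score for each candidate and prefer the one whose induced ordering agrees most closely with the stakeholder's ranking.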
Calarina Muslimani, Kerrick Johnstonbaugh, Suyog Chandramouli, Serena Booth, W. Bradley Knox, and Matthew E. Taylor. "Towards Improving Reward Design in RL: A Reward Alignment Metric for RL Practitioners." Reinforcement Learning Journal, vol. 6, 2025, pp. 2342–2367.
BibTeX:
@article{muslimani2025towards,
  title={Towards Improving Reward Design in {RL}: {A} Reward Alignment Metric for {RL} Practitioners},
  author={Muslimani, Calarina and Johnstonbaugh, Kerrick and Chandramouli, Suyog and Booth, Serena and Knox, W. Bradley and Taylor, Matthew E.},
  journal={Reinforcement Learning Journal},
  volume={6},
  pages={2342--2367},
  year={2025}
}