Tiered Reward: Designing Rewards for Specification and Fast Learning of Desired Behavior

By Zhiyuan Zhou, Shreyas Sundara Raman, Henry Sowerby, and Michael Littman

Reinforcement Learning Journal, vol. 3, 2024, pp. 1265–1288.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.



Abstract:

Reinforcement-learning agents seek to maximize a reward signal through environmental interactions. As humans, our job in the learning process is to design reward functions that express desired behavior and enable the agent to learn such behavior swiftly. However, designing good reward functions to induce the desired behavior is generally hard, let alone the question of which rewards make learning fast. In this work, we introduce a family of reward structures we call Tiered Reward that resolves both of these questions. We consider the reward-design problem in tasks formulated as reaching desirable states and avoiding undesirable states. To start, we propose a strict partial ordering of the policy space to resolve trade-offs in behavior preference. We prefer policies that reach the good states faster and with higher probability while avoiding the bad states longer. Next, we introduce Tiered Reward, a class of environment-independent reward functions, and show that it is guaranteed to induce policies that are Pareto-optimal according to our preference relation. Finally, we demonstrate that Tiered Reward leads to fast learning with multiple tabular and deep reinforcement-learning algorithms.
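To illustrate the idea described in the abstract, the following is a minimal, hypothetical sketch of a tier-based reward function; the function names, tier labels, and reward values are illustrative assumptions, not the paper's exact construction (the specific reward values that carry the paper's Pareto-optimality guarantee are derived in the paper itself).

```python
# Hypothetical sketch: states are partitioned into tiers, and the reward
# depends only on a state's tier. Worse tiers (e.g. bad states to avoid)
# receive strictly smaller rewards than better tiers (e.g. goal states).

def make_tiered_reward(tier_of_state, tier_rewards):
    """Build a state-reward function from a tier assignment.

    tier_of_state: dict mapping each state to a tier index
                   (0 = worst tier, highest index = goal tier).
    tier_rewards:  list of per-tier rewards, strictly increasing with tier.
    """
    assert all(a < b for a, b in zip(tier_rewards, tier_rewards[1:])), \
        "tier rewards must increase strictly from worst tier to best"

    def reward(state):
        return tier_rewards[tier_of_state[state]]

    return reward

# Illustrative example with three tiers: bad, intermediate, and goal states.
tiers = {"pit": 0, "hallway": 1, "goal": 2}
reward = make_tiered_reward(tiers, [-10.0, -1.0, 0.0])
print(reward("hallway"))  # -1.0
```

Because the reward depends only on the tier and not on the environment's transition dynamics, the same tier structure can be reused across tasks, which is what the abstract means by "environment-independent."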


Citation Information:

Zhiyuan Zhou, Shreyas Sundara Raman, Henry Sowerby, and Michael Littman. "Tiered Reward: Designing Rewards for Specification and Fast Learning of Desired Behavior." Reinforcement Learning Journal, vol. 3, 2024, pp. 1265–1288.

BibTeX:

@article{zhou2024tiered,
    title={Tiered Reward: {D}esigning Rewards for Specification and Fast Learning of Desired Behavior},
    author={Zhou, Zhiyuan and Raman, Shreyas Sundara and Sowerby, Henry and Littman, Michael},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1265--1288},
    year={2024}
}