Reward Centering

By Abhishek Naik, Yi Wan, Manan Tomar, and Richard S. Sutton

Reinforcement Learning Journal, vol. 4, 2024, pp. 1995–2016.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

We show that discounted methods for solving continuing reinforcement learning problems can perform significantly better if they center their rewards by subtracting out the rewards' empirical average. The improvement is substantial at commonly used discount factors and increases further as the discount factor approaches one. In addition, we show that if a _problem's_ rewards are shifted by a constant, then standard methods perform much worse, whereas methods with reward centering are unaffected. Estimating the average reward is straightforward in the on-policy setting; we propose a slightly more sophisticated method for the off-policy setting. Reward centering is a general idea, so we expect almost every reinforcement-learning algorithm to benefit from the addition of reward centering.
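To make the idea concrete, here is a minimal sketch of the simple, on-policy form of reward centering described in the abstract, applied to tabular Q-learning on a continuing task: a running empirical average of observed rewards is maintained and subtracted from each reward before the temporal-difference update. The `env` interface, the step sizes `alpha` and `eta`, and all other names are illustrative assumptions, not the paper's implementation, and the more sophisticated off-policy (value-based) centering the abstract mentions is not shown.

```python
import numpy as np

def q_learning_with_centering(env, num_steps, gamma=0.99,
                              alpha=0.1, eta=0.01, epsilon=0.1):
    """Tabular Q-learning with simple reward centering on a continuing task.

    A running average of observed rewards (r_bar) is subtracted from each
    reward before the TD update. Assumes a hypothetical `env` exposing
    num_states, num_actions, reset() -> s, and step(a) -> (s_next, r).
    """
    Q = np.zeros((env.num_states, env.num_actions))
    r_bar = 0.0                      # running estimate of the average reward
    s = env.reset()
    for _ in range(num_steps):
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            a = np.random.randint(env.num_actions)
        else:
            a = np.argmax(Q[s])
        s_next, r = env.step(a)
        # update the empirical average of rewards (simple centering)
        r_bar += eta * (r - r_bar)
        # TD update uses the centered reward (r - r_bar)
        td_error = (r - r_bar) + gamma * np.max(Q[s_next]) - Q[s, a]
        Q[s, a] += alpha * td_error
        s = s_next
    return Q, r_bar
```

Because only the centered reward enters the update, shifting every reward of the problem by a constant leaves the learned values (up to the shift absorbed by `r_bar`) unchanged, which is the robustness property the abstract highlights.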


Citation Information:

Abhishek Naik, Yi Wan, Manan Tomar, and Richard S. Sutton. "Reward Centering." Reinforcement Learning Journal, vol. 4, 2024, pp. 1995–2016.

BibTeX:

@article{naik2024reward,
    title={Reward Centering},
    author={Naik, Abhishek and Wan, Yi and Tomar, Manan and Sutton, Richard S.},
    journal={Reinforcement Learning Journal},
    volume={4},
    pages={1995--2016},
    year={2024}
}