Revisiting Sparse Rewards for Goal-Reaching Reinforcement Learning

By Gautham Vasan, Yan Wang, Fahim Shahriar, James Bergstra, Martin Jägersand, and A. Rupam Mahmood

Reinforcement Learning Journal, vol. 4, 2024, pp. 1841–1854.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Many real-world robot learning problems, such as pick-and-place or arriving at a destination, can be seen as a problem of reaching a goal state as soon as possible. These problems, when formulated as episodic reinforcement learning tasks, can easily be specified to align well with our intended goal: -1 reward every time step with termination upon reaching the goal state (termed *minimum-time* tasks). Despite this simplicity, such formulations are often overlooked in favor of dense rewards due to their perceived difficulty and lack of informativeness. Our studies contrast the two reward paradigms, revealing that the minimum-time task specification not only facilitates learning higher-quality policies but can also surpass dense-reward-based policies on their own performance metrics. Crucially, we also identify the goal-hit rate of the initial policy as a robust early indicator for learning success in such sparse feedback settings. Finally, using four distinct real robotic platforms, we show that it is possible to learn pixel-based policies from scratch within two to three hours using constant negative rewards. Our video demo can be found here: https://youtu.be/a6zlVUuKzBc
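To make the minimum-time formulation concrete, here is a minimal sketch of its per-step reward and termination logic. The distance-threshold goal test (`tol`) is a hypothetical stand-in; the paper's tasks define goal attainment per platform.

```python
import numpy as np

def minimum_time_step(state, goal, tol=0.05):
    """Minimum-time task: -1 reward every time step, terminate on reaching the goal.

    `tol` is an illustrative goal-reach threshold, not taken from the paper.
    """
    reached = np.linalg.norm(np.asarray(state) - np.asarray(goal)) <= tol
    reward = -1.0           # constant negative reward each step
    terminated = bool(reached)
    return reward, terminated
```

Under this specification the undiscounted episodic return equals the negative number of steps taken to reach the goal, so maximizing return is exactly minimizing time-to-goal.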


Citation Information:

Gautham Vasan, Yan Wang, Fahim Shahriar, James Bergstra, Martin Jägersand, and A. Rupam Mahmood. "Revisiting Sparse Rewards for Goal-Reaching Reinforcement Learning." Reinforcement Learning Journal, vol. 4, 2024, pp. 1841–1854.

BibTeX:

@article{vasan2024revisiting,
    title={Revisiting Sparse Rewards for Goal-Reaching Reinforcement Learning},
    author={Vasan, Gautham and Wang, Yan and Shahriar, Fahim and Bergstra, James and J{\"{a}}gersand, Martin and Mahmood, A. Rupam},
    journal={Reinforcement Learning Journal},
    volume={4},
    pages={1841--1854},
    year={2024}
}