Dissecting Deep RL with High Update Ratios: Combatting Value Divergence

By Marcel Hussing, Claas A Voelcker, Igor Gilitschenski, Amir-massoud Farahmand, and Eric Eaton

Reinforcement Learning Journal, vol. 2, 2024, pp. 995–1018.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

We show that, by combatting value function divergence, deep reinforcement learning algorithms can retain their ability to learn without resetting network parameters in settings where the number of gradient updates greatly exceeds the number of environment samples. Under large update-to-data ratios, a recent study by Nikishin et al. (2022) suggested the emergence of a primacy bias, in which agents overfit early interactions and downplay later experience, impairing their ability to learn. In this work, we investigate the phenomena leading to the primacy bias. We inspect the early stages of training that were conjectured to cause the failure to learn and find that one fundamental challenge is a long-standing acquaintance: value function divergence. Overinflated Q-values are found not only on out-of-distribution data but also on in-distribution data, and can be linked to overestimation of predictions for unseen actions, propelled by optimizer momentum. We employ a simple unit-ball normalization that enables learning under large update ratios, show its efficacy on the widely used dm_control suite, and obtain strong performance on the challenging dog tasks, competitive with model-based approaches. Our results question, in part, the prior explanation that sub-optimal learning is caused by overfitting to early data.
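The abstract does not spell out how the unit-ball normalization is applied; the following is a minimal PyTorch sketch of one plausible reading, in which the critic's penultimate-layer features are rescaled to lie within the unit ball before the final linear layer, bounding the scale of predicted Q-values. The network architecture, the placement of the normalization, and the projection details are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn as nn

    class UnitBallQNetwork(nn.Module):
        """Critic whose penultimate features are rescaled onto the unit ball
        before the final linear layer, keeping predicted Q-values from
        growing unboundedly with the feature norm (illustrative sketch)."""

        def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 256):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            )
            self.head = nn.Linear(hidden_dim, 1)

        def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
            features = self.encoder(torch.cat([obs, act], dim=-1))
            # Rescale any feature vector with norm > 1 back onto the unit ball;
            # the exact projection (ball vs. sphere) is an assumption here.
            norm = features.norm(dim=-1, keepdim=True).clamp(min=1.0)
            return self.head(features / norm)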


Citation Information:

Marcel Hussing, Claas A Voelcker, Igor Gilitschenski, Amir-massoud Farahmand, and Eric Eaton. "Dissecting Deep RL with High Update Ratios: Combatting Value Divergence." Reinforcement Learning Journal, vol. 2, 2024, pp. 995–1018.

BibTeX:

@article{hussing2024dissecting,
    title={Dissecting Deep {RL} with High Update Ratios: {C}ombatting Value Divergence},
    author={Hussing, Marcel and Voelcker, Claas A and Gilitschenski, Igor and Farahmand, Amir-massoud and Eaton, Eric},
    journal={Reinforcement Learning Journal},
    volume={2},
    pages={995--1018},
    year={2024}
}