On the consistency of hyper-parameter selection in value-based deep reinforcement learning

By Johan Samir Obando Ceron, João Guilherme Madeira Araújo, Aaron Courville, and Pablo Samuel Castro

Reinforcement Learning Journal, vol. 3, 2024, pp. 1037–1059.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Deep reinforcement learning (deep RL) has achieved tremendous success in various domains through a combination of algorithmic design and careful selection of hyper-parameters. Algorithmic improvements are often the result of iterative enhancements built upon prior approaches, while hyper-parameter choices are typically inherited from previous methods or fine-tuned specifically for the proposed technique. Despite their crucial impact on performance, hyper-parameter choices are frequently overshadowed by algorithmic advancements. This paper conducts an extensive empirical study focusing on the reliability of hyper-parameter selection for value-based deep reinforcement learning agents. Our findings not only help establish which hyper-parameters are most critical to tune, but also help clarify which tunings remain consistent across different training regimes.
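To make the setting concrete, below is a minimal, hypothetical Python sketch (not the paper's code) of the kind of question the study asks: sweep one hyper-parameter for a value-based agent under two training budgets, then check whether the resulting ranking of values stays consistent across regimes. The train_agent function, the learning-rate grid, and the two regimes are all illustrative assumptions.

import math
import random
from statistics import mean

def train_agent(lr: float, num_frames: int, seed: int) -> float:
    """Stand-in for a real training run; returns a final score.

    Toy response surface: scores peak near lr = 1e-4, and the peak
    sharpens as the training budget (num_frames) grows, so a setting
    tuned in one regime need not transfer to another.
    """
    rng = random.Random(seed)
    width = 2.0 * 40_000 / num_frames
    score = 100.0 * math.exp(-((math.log10(lr) + 4.0) ** 2) / width)
    return score + rng.gauss(0.0, 2.0)  # seed-level noise

learning_rates = [1e-3, 3e-4, 1e-4, 3e-5, 1e-5]
regimes = {"low_data": 40_000, "full": 200_000}  # frames per run
seeds = range(5)

for regime, frames in regimes.items():
    # Average over seeds, then rank the candidate values; comparing
    # rankings across regimes is the consistency check of interest.
    avg = {lr: mean(train_agent(lr, frames, s) for s in seeds)
           for lr in learning_rates}
    ranking = sorted(avg, key=avg.get, reverse=True)
    print(regime, "best lr:", ranking[0], "ranking:", ranking)

A hyper-parameter whose ranking is stable across regimes in such a sweep is the kind the paper would call consistent; one whose best value shifts with the training budget would need to be re-tuned per regime.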


Citation Information:

Johan Samir Obando Ceron, João Guilherme Madeira Araújo, Aaron Courville, and Pablo Samuel Castro. "On the consistency of hyper-parameter selection in value-based deep reinforcement learning." Reinforcement Learning Journal, vol. 3, 2024, pp. 1037–1059.

BibTeX:

@article{ceron2024consistency,
    title={On the consistency of hyper-parameter selection in value-based deep reinforcement learning},
    author={Ceron, Johan Samir Obando and Ara{\'{u}}jo, Jo{\~{a}}o Guilherme Madeira and Courville, Aaron and Castro, Pablo Samuel},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1037--1059},
    year={2024}
}