Reinforcement Learning Journal, vol. 5, 2024, pp. 2162–2177.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
Deep Reinforcement Learning (DRL) has become popular due to promising results in chatbot, healthcare, and autonomous driving applications. However, few DRL algorithms are rigorously evaluated in terms of their space or time efficiency, making them difficult to develop and deploy in practice. Existing performance comparisons in the literature mostly focus on inference accuracy, without considering real-world constraints such as maximum runtime and memory. Furthermore, many works do not make their code publicly accessible for others to use. This paper addresses these gaps by presenting the most comprehensive resource usage evaluation and performance comparison of DRL algorithms to date. This work focuses on publicly accessible discrete model-free DRL algorithms because of their practicality in real-world problems where efficient implementations are necessary. Although other state-of-the-art algorithms exist, few were deployment-ready for training on a large number of environments at the time of this study. In total, 16 DRL algorithms were trained in 23 different environments (468 seeds total), which collectively required 256 GB and 830 CPU days to run all experiments and 1.8 GB to store all models. Overall, our results validate several known challenges in DRL, including exploration and memory inefficiencies, the classic exploration-exploitation trade-off, and high resource utilization. To address these challenges, this paper suggests numerous opportunities for future work to improve the capabilities of modern algorithms. The findings of this paper are intended to aid researchers and practitioners in improving and employing DRL algorithms in time-sensitive and resource-constrained applications such as economics, cybersecurity, robotics, and the Internet of Things.
Olivia P. Dizon-Paradis, Stephen E. Wormald, Daniel E. Capecci, Avanti Bhandarkar, and Damon L. Woodard. "Resource Usage Evaluation of Discrete Model-Free Deep Reinforcement Learning Algorithms." Reinforcement Learning Journal, vol. 5, 2024, pp. 2162–2177.
BibTeX:
@article{dizon-paradis2024resource,
    title   = {Resource Usage Evaluation of Discrete Model-Free Deep Reinforcement Learning Algorithms},
    author  = {Dizon-Paradis, Olivia P. and Wormald, Stephen E. and Capecci, Daniel E. and Bhandarkar, Avanti and Woodard, Damon L.},
    journal = {Reinforcement Learning Journal},
    volume  = {5},
    pages   = {2162--2177},
    year    = {2024}
}