Reinforcement Learning Journal, vol. 1, 2024, pp. 92–107.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
In search of a simple baseline for Deep Reinforcement Learning in locomotion tasks, we propose a model-free open-loop strategy. By leveraging prior knowledge and the elegance of simple oscillators to generate periodic joint motions, it achieves respectable performance in five different locomotion environments, with a number of tunable parameters that is a tiny fraction of the thousands typically required by DRL algorithms. We conduct two additional experiments using open-loop oscillators to identify current shortcomings of these algorithms. Our results show that, compared to the baseline, DRL is more prone to performance degradation when exposed to sensor noise or failure. Furthermore, we demonstrate a successful transfer from simulation to reality using an elastic quadruped, where RL fails without randomization or reward engineering. Overall, the proposed baseline and associated experiments highlight the existing limitations of DRL for robotic applications, provide insights on how to address them, and encourage reflection on the costs of complexity and generality.
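As a rough illustration of the open-loop idea described above, the minimal sketch below applies fixed sine-wave commands to a Gymnasium MuJoCo locomotion environment without using any sensory feedback. It is not the authors' controller or tuned parametrization; the environment name, amplitudes, phases, offsets, and frequency are hypothetical placeholders chosen only to make the example runnable.

```python
import numpy as np
import gymnasium as gym

# Illustrative open-loop oscillator baseline (sketch, not the paper's controller):
# each joint i receives a_i * sin(2*pi*f*t + phi_i) + b_i, with no feedback.
env = gym.make("HalfCheetah-v4")  # hypothetical choice of locomotion environment
n_joints = env.action_space.shape[0]

rng = np.random.default_rng(0)
amplitudes = rng.uniform(0.2, 0.5, n_joints)    # a_i: placeholder values
phases = rng.uniform(0.0, 2 * np.pi, n_joints)  # phi_i: placeholder values
offsets = np.zeros(n_joints)                    # b_i
frequency = 2.0                                 # Hz, placeholder
dt = env.unwrapped.dt                           # simulated time per env step

obs, _ = env.reset(seed=0)
episode_return, t = 0.0, 0.0
for _ in range(1000):
    # Periodic joint command, clipped to the valid action range.
    action = amplitudes * np.sin(2 * np.pi * frequency * t + phases) + offsets
    action = np.clip(action, env.action_space.low, env.action_space.high)
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    t += dt
    if terminated or truncated:
        break
print(f"Episode return: {episode_return:.1f}")
```

In this sketch the only tunable quantities are the per-joint amplitudes, phases, offsets, and a shared frequency, which conveys why such a baseline has orders of magnitude fewer parameters than a DRL policy network.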
Antonin Raffin, Olivier Sigaud, Jens Kober, Alin Albu-Schaeffer, João Silvério, and Freek Stulp. "An Open-Loop Baseline for Reinforcement Learning Locomotion Tasks." Reinforcement Learning Journal, vol. 1, 2024, pp. 92–107.
BibTeX:
@article{raffin2024open,
  title={An Open-Loop Baseline for Reinforcement Learning Locomotion Tasks},
  author={Raffin, Antonin and Sigaud, Olivier and Kober, Jens and Albu-Schaeffer, Alin and Silv{\'{e}}rio, Jo{\~{a}}o and Stulp, Freek},
  journal={Reinforcement Learning Journal},
  volume={1},
  pages={92--107},
  year={2024}
}