Combining Automated Optimisation of Hyperparameters and Reward Shape

By Julian Dierkes, Emma Cramer, Holger Hoos, and Sebastian Trimpe

Reinforcement Learning Journal, vol. 3, 2024, pp. 1441–1466.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

There has been significant progress in deep reinforcement learning (RL) in recent years. Nevertheless, finding suitable hyperparameter configurations and reward functions remains challenging even for experts, and performance heavily relies on these design choices. Also, most RL research is conducted on known benchmarks where knowledge about these choices already exists. However, novel practical applications often pose complex tasks for which no prior knowledge about good hyperparameters and reward functions is available, thus necessitating their derivation from scratch. Prior work has examined automatically tuning either hyperparameters or reward functions individually. We demonstrate empirically that an RL algorithm's hyperparameter configurations and reward function are often mutually dependent, meaning neither can be fully optimised without appropriate values for the other. We then propose a methodology for the combined optimisation of hyperparameters and the reward function. Furthermore, we include a variance penalty as an optimisation objective to improve the stability of learned policies. We conducted extensive experiments using Proximal Policy Optimisation and Soft Actor-Critic on four environments. Our results show that combined optimisation significantly improves over baseline performance in half of the environments and achieves competitive performance in the others, with only a minor increase in computational costs. This suggests that combined optimisation should be best practice.
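Illustrative sketch (not from the paper): to make the idea of "combined optimisation" concrete, the following hypothetical Python snippet jointly searches algorithm hyperparameters and reward-shape weights with a single black-box optimiser (Optuna here, chosen for familiarity; not necessarily the optimiser used by the authors) and scores each trial with a variance-penalised return, as the abstract describes. The function train_and_evaluate, the specific hyperparameters, reward components, and the penalty weight are illustrative placeholders, not the authors' implementation.

import statistics

import optuna


def train_and_evaluate(lr, gamma, reward_weights):
    """Placeholder: train an RL agent (e.g. PPO) with the given
    hyperparameters and a reward shaped as a weighted sum of the
    components in `reward_weights`, then return the per-episode
    returns of the final policy."""
    raise NotImplementedError


def objective(trial):
    # Algorithm hyperparameters, sampled from the search space.
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    gamma = trial.suggest_float("gamma", 0.9, 0.9999)

    # Reward-shape parameters, tuned jointly in the same search space
    # rather than fixed in advance.
    reward_weights = {
        "progress": trial.suggest_float("w_progress", 0.0, 10.0),
        "energy": trial.suggest_float("w_energy", 0.0, 1.0),
    }

    returns = train_and_evaluate(lr, gamma, reward_weights)

    # Variance-penalised objective: mean return minus a penalty on its
    # standard deviation, favouring stable policies (the penalty
    # weight 0.2 is an illustrative choice).
    return statistics.mean(returns) - 0.2 * statistics.stdev(returns)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
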


Citation Information:

Julian Dierkes, Emma Cramer, Holger Hoos, and Sebastian Trimpe. "Combining Automated Optimisation of Hyperparameters and Reward Shape." Reinforcement Learning Journal, vol. 3, 2024, pp. 1441–1466.

BibTeX:

@article{dierkes2024combining,
    title={Combining Automated Optimisation of Hyperparameters and Reward Shape},
    author={Dierkes, Julian and Cramer, Emma and Hoos, Holger and Trimpe, Sebastian},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1441--1466},
    year={2024}
}