Learning to Optimize for Reinforcement Learning

By Qingfeng Lan, A. Rupam Mahmood, Shuicheng YAN, and Zhongwen Xu

Reinforcement Learning Journal, vol. 2, 2024, pp. 481–497.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.



Abstract:

In recent years, by leveraging more data, computation, and diverse tasks, learned optimizers have achieved remarkable success in supervised learning, outperforming classical hand-designed optimizers. Reinforcement learning (RL), however, is fundamentally different from supervised learning, and in practice these learned optimizers do not work well even in simple RL tasks. We investigate this phenomenon and identify two issues. First, the distribution of agent gradients is not independent and identically distributed, leading to inefficient meta-training. Second, due to highly stochastic agent-environment interactions, the agent gradients have high bias and variance, which makes learning an optimizer for RL more difficult. We propose pipeline training and a novel optimizer structure with a good inductive bias to address these issues, making it possible to learn an optimizer for reinforcement learning from scratch. We show that, although trained only on toy tasks, our learned optimizer can generalize to unseen complex tasks in Brax.
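To make the idea of a learned optimizer concrete, the sketch below shows the general pattern: a small neural network that maps an agent's gradients to parameter updates, in place of a hand-designed rule such as Adam. This is a minimal, hypothetical illustration in JAX under my own assumptions (the function names, network size, and feature choice are not from the paper) and does not reproduce the authors' optimizer structure or pipeline training.

```python
# Hypothetical sketch of a learned optimizer: a tiny per-parameter MLP that
# maps gradient values to updates. Names and sizes are illustrative assumptions,
# not the authors' implementation.
import jax
import jax.numpy as jnp


def init_optimizer_params(key, hidden=16):
    """Initialize a small MLP that maps a scalar gradient feature to an update."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (1, hidden)) * 0.1,
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, 1)) * 0.1,
        "b2": jnp.zeros(1),
    }


def learned_update(opt_params, grad):
    """Apply the learned mapping element-wise to one gradient tensor."""
    g = grad.reshape(-1, 1)  # treat each coordinate independently
    h = jnp.tanh(g @ opt_params["w1"] + opt_params["b1"])
    out = h @ opt_params["w2"] + opt_params["b2"]
    return out.reshape(grad.shape)


def apply_learned_optimizer(opt_params, agent_params, agent_grads):
    """One optimization step: subtract the learned update from each parameter,
    replacing a hand-designed rule like SGD or Adam."""
    return jax.tree_util.tree_map(
        lambda p, g: p - learned_update(opt_params, g), agent_params, agent_grads
    )
```

In meta-training, `opt_params` would themselves be optimized so that agents updated this way learn quickly; the paper's contribution lies in how that meta-training is structured for RL, which this sketch does not attempt to show.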


Citation Information:

Qingfeng Lan, A. Rupam Mahmood, Shuicheng YAN, and Zhongwen Xu. "Learning to Optimize for Reinforcement Learning." Reinforcement Learning Journal, vol. 2, 2024, pp. 481–497.

BibTeX:

@article{lan2024learning,
    title={Learning to Optimize for Reinforcement Learning},
    author={Lan, Qingfeng and Mahmood, A. Rupam and YAN, Shuicheng and Xu, Zhongwen},
    journal={Reinforcement Learning Journal},
    volume={2},
    pages={481--497},
    year={2024}
}