Stabilizing Extreme Q-learning by Maclaurin Expansion

By Motoki Omura, Takayuki Osa, Yusuke Mukuta, and Tatsuya Harada

Reinforcement Learning Journal, vol. 3, 2024, pp. 1427–1440.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

In offline reinforcement learning, in-sample learning methods have been widely used to prevent the performance degradation caused by evaluating out-of-distribution actions from the dataset. Extreme Q-learning (XQL) employs a loss function based on the assumption that the Bellman error follows a Gumbel distribution, enabling it to model the soft optimal value function in an in-sample manner, and has demonstrated strong performance in both offline and online reinforcement learning settings. However, issues remain, such as the instability caused by the exponential term in the loss function and the risk of the error distribution deviating from the Gumbel distribution. We therefore propose Maclaurin Expanded Extreme Q-learning, which applies a Maclaurin expansion to the loss function of XQL to enhance stability against large errors. This approach adjusts the modeled value function between the value function under the behavior policy and the soft optimal value function, achieving a trade-off between stability and optimality that depends on the order of expansion. It also allows the assumed error distribution to be adjusted from a normal distribution to a Gumbel distribution. Our method significantly stabilizes learning in online RL tasks from DM Control, where XQL was previously unstable, and improves performance in several offline RL tasks from D4RL.
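For concreteness, the loss modification described in the abstract can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes the XQL value objective takes the Gumbel-regression form E[exp(u) − u − 1] with u = (Q(s,a) − V(s)) / β, and that the proposed method truncates the Maclaurin series of exp(u) at a chosen order, so that order 2 reduces to a squared (normal-error) loss and higher orders approach the original exponential (Gumbel-error) loss. The function name and arguments below are hypothetical.

```python
import torch
from math import factorial


def truncated_xql_value_loss(q_values, v_values, beta=1.0, order=4):
    """Sketch of a Maclaurin-truncated XQL value loss (hypothetical API).

    Assumed full XQL loss (Gumbel regression): mean(exp(u) - u - 1),
    with u = (Q - V) / beta. Replacing exp(u) by its Maclaurin series
    and truncating gives sum_{n=2}^{order} u^n / n!:
      * order = 2  -> 0.5 * u^2, i.e. squared error (normal-error assumption,
        value function of the behavior policy),
      * order -> inf -> the exponential loss (Gumbel-error assumption,
        soft optimal value function).
    Even truncation orders keep the loss bounded below in this sketch.
    """
    u = (q_values - v_values) / beta
    loss = sum(u.pow(n) / factorial(n) for n in range(2, order + 1))
    return loss.mean()


# Usage example with a random offline batch of Q targets and V estimates.
q = torch.randn(256)
v = torch.randn(256, requires_grad=True)
loss = truncated_xql_value_loss(q, v, beta=2.0, order=4)
loss.backward()
```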


Citation Information:

Motoki Omura, Takayuki Osa, Yusuke Mukuta, and Tatsuya Harada. "Stabilizing Extreme Q-learning by Maclaurin Expansion." Reinforcement Learning Journal, vol. 3, 2024, pp. 1427–1440.

BibTeX:

@article{omura2024stabilizing,
    title={Stabilizing Extreme {Q-learning} by Maclaurin Expansion},
    author={Omura, Motoki and Osa, Takayuki and Mukuta, Yusuke and Harada, Tatsuya},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1427--1440},
    year={2024}
}