Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.
In reinforcement learning, off-policy actor-critic methods like DDPG and TD3 use deterministic policy gradients: the Q-function is learned from environment data, while the actor maximizes it via gradient ascent. We observe that in complex tasks such as dexterous manipulation and restricted locomotion with mobility constraints, the Q-function exhibits many local optima, making gradient ascent prone to getting stuck. To address this, we introduce SAVO, an actor architecture that (i) generates multiple action proposals and selects the one with the highest Q-value, and (ii) constructs successive approximations of the Q-function that truncate poor local optima to guide gradient ascent more effectively. We evaluate on tasks including restricted locomotion, dexterous manipulation, and recommender systems with large discrete action spaces, and show that our actor finds optimal actions more frequently and outperforms alternative actor architectures.
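A minimal Python sketch (not the authors' implementation) of the proposal-and-select step described in the abstract: several candidate actions are scored with a learned Q-function and the highest-scoring one is executed. The names propose_actions and q_network are hypothetical stand-ins for SAVO's learned proposal actors and critic.

import numpy as np

def propose_actions(state, num_proposals=4, action_dim=2, rng=None):
    # Hypothetical proposal generator; in SAVO these come from learned actors.
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(-1.0, 1.0, size=(num_proposals, action_dim))

def q_network(state, action):
    # Hypothetical critic: a smooth toy function standing in for a learned Q.
    return -np.sum((action - np.tanh(state[: action.shape[-1]])) ** 2)

def select_action(state):
    # Score every proposal with the critic and act greedily over proposals.
    proposals = propose_actions(state)
    q_values = np.array([q_network(state, a) for a in proposals])
    return proposals[int(np.argmax(q_values))]

state = np.array([0.3, -0.5])
print(select_action(state))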
Ayush Jain, Norio Kosaka, Xinhu Li, Kyung-Min Kim, Erdem Biyik, and Joseph J Lim. "Mitigating Suboptimality of Deterministic Policy Gradients in Complex Q-functions." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
BibTeX:
@article{jain2025mitigating,
  title={Mitigating Suboptimality of Deterministic Policy Gradients in Complex Q-functions},
  author={Jain, Ayush and Kosaka, Norio and Li, Xinhu and Kim, Kyung-Min and Biyik, Erdem and Lim, Joseph J},
  journal={Reinforcement Learning Journal},
  year={2025}
}