Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.
Offline reinforcement learning (RL) aims to learn an optimal policy from a static dataset, making it particularly valuable in scenarios where data collection is costly, such as robotics. A major challenge in offline RL is distributional shift, in which the learned policy deviates from the dataset distribution, potentially leading to unreliable out-of-distribution actions. To mitigate this issue, regularization techniques have been employed. While many existing methods rely on density-ratio-based measures, such as the $f$-divergence, for regularization, we propose an approach that uses the Wasserstein distance, which is robust to out-of-distribution data and captures the similarity between actions. Our method employs input-convex neural networks (ICNNs) to model optimal transport maps, enabling computation of the Wasserstein distance in a discriminator-free manner, thereby avoiding adversarial training and ensuring stable learning. Our approach demonstrates comparable or superior performance to widely used existing methods on the D4RL benchmark dataset. The code is available at [https://github.com/motokiomura/Q-DOT](https://github.com/motokiomura/Q-DOT).
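As a rough illustration of the ICNN-based transport map mentioned in the abstract, below is a minimal sketch (not the paper's reference implementation) of an input-convex potential $\varphi$ whose gradient with respect to the action, $T(a) = \nabla_a \varphi(a)$, can serve as an optimal transport map under Brenier's theorem. The layer sizes, the weight-clamping trick for convexity, and the helper names `ICNN` and `transport_map` are illustrative assumptions, not taken from the paper.

```python
# Sketch of an input-convex neural network (ICNN) potential phi(a) and the
# transport map T(a) = grad_a phi(a). Assumes PyTorch; hyperparameters are
# placeholders, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """Scalar potential phi(a) that is convex in the action a."""

    def __init__(self, action_dim, hidden=64, n_layers=3):
        super().__init__()
        # Direct paths from the input may use unconstrained weights.
        self.input_layers = nn.ModuleList(
            [nn.Linear(action_dim, hidden) for _ in range(n_layers)]
        )
        # Hidden-to-hidden paths must use non-negative weights to preserve
        # convexity in a; we clamp them in forward().
        self.hidden_layers = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(n_layers - 1)]
        )
        self.out = nn.Linear(hidden, 1)

    def forward(self, a):
        # Softplus is convex and non-decreasing, which keeps the composition convex.
        z = F.softplus(self.input_layers[0](a))
        for inp, hid in zip(self.input_layers[1:], self.hidden_layers):
            z = F.softplus(inp(a) + F.linear(z, hid.weight.clamp(min=0.0)))
        return F.linear(z, self.out.weight.clamp(min=0.0), self.out.bias)


def transport_map(phi, a):
    """T(a) = grad_a phi(a): the gradient of a convex potential is an OT map."""
    a = a.clone().detach().requires_grad_(True)
    return torch.autograd.grad(phi(a).sum(), a, create_graph=True)[0]
```

With the squared-Euclidean cost, the mean transport cost $\mathbb{E}\big[\lVert a - T(a)\rVert^2\big]$ over dataset actions yields a discriminator-free Wasserstein-style penalty that could, in principle, be weighted into an actor objective; the exact training objective and estimator used in this work are given in the paper and repository above.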
Motoki Omura, Yusuke Mukuta, Kazuki Ota, Takayuki Osa, and Tatsuya Harada. "Offline Reinforcement Learning with Wasserstein Regularization via Optimal Transport Maps." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
BibTeX:

@article{omura2025offline,
  title={Offline Reinforcement Learning with {Wasserstein} Regularization via Optimal Transport Maps},
  author={Omura, Motoki and Mukuta, Yusuke and Ota, Kazuki and Osa, Takayuki and Harada, Tatsuya},
  journal={Reinforcement Learning Journal},
  year={2025}
}