On the Effect of Regularization in Policy Mirror Descent

By Jan Felix Kleuker, Aske Plaat, and Thomas M. Moerland

Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.


Abstract:

Policy Mirror Descent (PMD) has emerged as a unifying framework in reinforcement learning (RL) by linking policy gradient methods with a first-order optimization method known as mirror descent. At its core, PMD incorporates two key regularization components: (i) a distance term that enforces a trust region for stable policy updates and (ii) an MDP regularizer that augments the reward function to promote structure and robustness. While PMD has been extensively studied in theory, empirical investigations remain scarce. This work provides a large-scale empirical analysis of the interplay between these two regularization techniques, running over 500k training seeds on small RL environments. Our results demonstrate that, although the two regularizers can partially substitute each other, their precise combination is critical for achieving robust performance. These findings highlight the potential for advancing research on more robust algorithms in RL.
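For orientation, the two regularizers described in the abstract appear in the generic regularized PMD update commonly studied in this line of work. The following is a minimal sketch in standard PMD notation, not an equation taken from the paper:

\[
\pi_{k+1}(\cdot \mid s) \;=\; \arg\max_{\pi \in \Delta(\mathcal{A})}
\Big\{ \big\langle Q_\tau^{\pi_k}(s,\cdot),\, \pi \big\rangle
\;-\; \tau\, h(\pi)
\;-\; \tfrac{1}{\eta_k}\, D_h\big(\pi,\ \pi_k(\cdot \mid s)\big) \Big\},
\]

where $h$ is a convex MDP regularizer (e.g., negative entropy) that augments the reward with strength $\tau$, $D_h$ is the induced Bregman divergence acting as the trust-region distance term, and $\eta_k$ is the step size. The paper's empirical study concerns how the choices of $\tau$ and the distance term interact.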


Citation Information:

Jan Felix Kleuker, Aske Plaat, and Thomas M. Moerland. "On the Effect of Regularization in Policy Mirror Descent." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

BibTeX:
@article{kleuker2025effect,
    title={On the Effect of Regularization in Policy Mirror Descent},
    author={Kleuker, Jan Felix and Plaat, Aske and Moerland, Thomas M.},
    journal={Reinforcement Learning Journal},
    year={2025}
}