A Simple Mixture Policy Parameterization for Improving Sample Efficiency of CVaR Optimization

By Yudong Luo, Yangchen Pan, Han Wang, Philip Torr, and Pascal Poupart

Reinforcement Learning Journal, vol. 2, 2024, pp. 573–592.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Reinforcement learning algorithms that use policy gradients (PG) to optimize Conditional Value at Risk (CVaR) face significant sample inefficiency, hindering their practical application. This inefficiency stems from two main factors: a focus on tail-end performance that discards many sampled trajectories, and the potential for vanishing gradients when the lower tail of the return distribution is overly flat. To address these challenges, we propose a simple mixture policy parameterization. This method integrates a risk-neutral policy with an adjustable policy to form a risk-averse policy. With this strategy, all collected trajectories can be used for policy updating, and the vanishing-gradient issue is counteracted by stimulating higher returns through the risk-neutral component, thus lifting the tail and preventing flatness. Our empirical study shows that this mixture parameterization is uniquely effective across a variety of benchmark domains. In particular, it succeeds in identifying risk-averse CVaR policies in some MuJoCo environments where traditional CVaR-PG fails to learn a reasonable policy.
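
The core idea described in the abstract, combining a risk-neutral component with an adjustable component into a single mixture policy, can be pictured with the short sketch below. This is an illustrative reading of the abstract, not the authors' reference implementation: the class name MixturePolicy, the attribute mix_logit, and the discrete-action setting are assumptions made for the sketch (the paper itself evaluates on continuous-control MuJoCo tasks).

# Minimal sketch (assumption-laden, not the paper's code) of a mixture policy:
# a risk-neutral network and an adjustable network are combined with a
# learnable mixing weight; actions are sampled from the resulting mixture.
import torch
import torch.nn as nn


class MixturePolicy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # Risk-neutral component: could be trained with a standard mean-return PG loss.
        self.neutral = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_actions)
        )
        # Adjustable component: shaped by the risk-averse (CVaR) objective.
        self.adjustable = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_actions)
        )
        # Learnable mixing weight in (0, 1), obtained via a sigmoid.
        self.mix_logit = nn.Parameter(torch.zeros(1))

    def action_probs(self, obs: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.mix_logit)  # weight on the risk-neutral component
        p_neutral = torch.softmax(self.neutral(obs), dim=-1)
        p_adjust = torch.softmax(self.adjustable(obs), dim=-1)
        return w * p_neutral + (1.0 - w) * p_adjust  # convex mixture, sums to 1

    def sample(self, obs: torch.Tensor):
        dist = torch.distributions.Categorical(probs=self.action_probs(obs))
        action = dist.sample()
        return action, dist.log_prob(action)


if __name__ == "__main__":
    policy = MixturePolicy(obs_dim=4, n_actions=2)
    obs = torch.randn(4)
    action, logp = policy.sample(obs)
    print(action.item(), logp.item())

Because every sampled action carries probability mass from both components, trajectories drawn from the mixture remain usable for updating both parts, which is the sample-efficiency argument the abstract makes.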


Citation Information:

Yudong Luo, Yangchen Pan, Han Wang, Philip Torr, and Pascal Poupart. "A Simple Mixture Policy Parameterization for Improving Sample Efficiency of CVaR Optimization." Reinforcement Learning Journal, vol. 2, 2024, pp. 573–592.

BibTeX:

@article{luo2024simple,
    title={A Simple Mixture Policy Parameterization for Improving Sample Efficiency of {CVaR} Optimization},
    author={Luo, Yudong and Pan, Yangchen and Wang, Han and Torr, Philip and Poupart, Pascal},
    journal={Reinforcement Learning Journal},
    volume={2},
    pages={573--592},
    year={2024}
}