Towards Principled, Practical Policy Gradient for Bandits and Tabular MDPs

By Michael Lu, Matin Aghaei, Anant Raj, and Sharan Vaswani

Reinforcement Learning Journal, vol. 1, 2024, pp. 216–282.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

We consider (stochastic) softmax policy gradient (PG) methods for bandits and tabular Markov decision processes (MDPs). While the PG objective is non-concave, recent research has used the objective's smoothness and gradient domination properties to achieve convergence to an optimal policy. However, these theoretical results require setting the algorithm parameters according to unknown problem-dependent quantities (e.g., the optimal action or the true reward vector in a bandit problem). To address this issue, we borrow ideas from the optimization literature to design practical, principled PG methods in both the exact and stochastic settings. In the exact setting, we employ an Armijo line-search to set the step-size for softmax PG and demonstrate a linear convergence rate. In the stochastic setting, we utilize exponentially decreasing step-sizes, and characterize the convergence rate of the resulting algorithm. We show that the proposed algorithm offers similar theoretical guarantees as the state-of-the-art results, but does not require the knowledge of oracle-like quantities. For the multi-armed bandit setting, our techniques result in a theoretically-principled PG algorithm that does not require explicit exploration, the knowledge of the reward gap, the reward distributions, or the noise. Finally, we empirically compare the proposed methods to PG approaches that require oracle knowledge, and demonstrate competitive performance.
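To make the two ideas in the abstract concrete, here is a minimal sketch of exact softmax PG on a multi-armed bandit with an Armijo backtracking line-search setting the step-size. The reward vector r, the backtracking parameters (eta_max, c, beta), and the function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(theta):
    z = theta - theta.max()  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def exact_pg_armijo(r, iters=200, eta_max=1e5, c=0.5, beta=0.8):
    """Exact softmax PG for a bandit, step-size via Armijo backtracking.

    Sketch under assumptions: r is the (known) true reward vector, so the
    gradient of the expected reward f(theta) = pi_theta @ r is exact.
    """
    theta = np.zeros_like(r, dtype=float)
    for _ in range(iters):
        pi = softmax(theta)
        f = pi @ r
        grad = pi * (r - f)  # gradient of f w.r.t. the logits theta
        # Backtrack until the Armijo sufficient-increase condition holds:
        # f(theta + eta * grad) >= f(theta) + c * eta * ||grad||^2
        eta = eta_max
        while eta > 1e-12 and softmax(theta + eta * grad) @ r < f + c * eta * (grad @ grad):
            eta *= beta
        theta = theta + eta * grad
    return softmax(theta)

print(exact_pg_armijo(np.array([0.2, 0.5, 0.9])))  # mass concentrates on arm 2
```

For the stochastic setting, the abstract's exponentially decreasing step-sizes can be sketched as eta_t = eta0 * gamma**t with a decay rate tied to the horizon T; the specific schedule, sample_reward interface, and constants below are assumptions for illustration.

```python
def stochastic_pg_exp_step(sample_reward, K, T=5000, eta0=5.0, seed=0):
    """Stochastic softmax PG with exponentially decreasing step-sizes.

    Sketch: sample_reward(a) returns a noisy reward for arm a; the schedule
    eta_t = eta0 * gamma**t decays from eta0 to roughly eta0 / T over T steps.
    """
    rng = np.random.default_rng(seed)
    gamma = (1.0 / T) ** (1.0 / T)  # illustrative decay rate
    theta = np.zeros(K)
    for t in range(T):
        pi = softmax(theta)
        a = rng.choice(K, p=pi)
        R = sample_reward(a)
        # Score-function (REINFORCE) estimate of the gradient of E[R]
        grad_hat = R * ((np.arange(K) == a).astype(float) - pi)
        theta += eta0 * gamma**t * grad_hat
    return softmax(theta)
```

Note that neither sketch needs the reward gap, the noise model, or an explicit exploration bonus; the line-search and the step-size schedule are set from observable quantities only, which is the practical point the abstract emphasizes.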


Citation Information:

Michael Lu, Matin Aghaei, Anant Raj, and Sharan Vaswani. "Towards Principled, Practical Policy Gradient for Bandits and Tabular MDPs." Reinforcement Learning Journal, vol. 1, 2024, pp. 216–282.

BibTeX:

@article{lu2024towards,
    title={Towards Principled, Practical Policy Gradient for Bandits and Tabular {MDP}s},
    author={Lu, Michael and Aghaei, Matin and Raj, Anant and Vaswani, Sharan},
    journal={Reinforcement Learning Journal},
    volume={1},
    pages={216--282},
    year={2024}
}