Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation

By Yixuan Zhang and Qiaomin Xie

Reinforcement Learning Journal, vol. 3, 2024, pp. 1168–1210.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Stochastic Approximation (SA) is a widely used algorithmic approach in various fields, including optimization and reinforcement learning (RL). Among RL algorithms, Q-learning is particularly popular due to its empirical success. In this paper, we study asynchronous Q-learning with constant stepsize, which is commonly used in practice for its fast convergence. By connecting constant-stepsize Q-learning to a time-homogeneous Markov chain, we show the distributional convergence of the iterates in Wasserstein distance and establish its exponential convergence rate. We also establish a Central Limit Theorem for the Q-learning iterates, demonstrating the asymptotic normality of the averaged iterates. Moreover, we provide an explicit expansion of the asymptotic bias of the averaged iterate in the stepsize. Specifically, the bias is proportional to the stepsize up to higher-order terms, and we provide an explicit expression for the linear coefficient. This precise characterization of the bias allows the application of the Richardson-Romberg (RR) extrapolation technique to construct a new estimate that is provably closer to the optimal Q-function. Numerical results corroborate our theoretical findings on the improvement achieved by RR extrapolation.
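To make the extrapolation idea concrete: if the averaged iterate satisfies E[Q̄_α] = Q* + αB + O(α²) for some stepsize-independent matrix B, then the combination 2·Q̄_α − Q̄_{2α} cancels the linear term and is Q* + O(α²). The sketch below is an illustrative implementation, not the authors' code; it assumes a tabular MDP with transition tensor P[s, a, s'] and reward matrix R[s, a], a uniform behavior policy, and example stepsizes. It runs constant-stepsize asynchronous Q-learning with iterate averaging and forms the Richardson-Romberg estimate from two stepsizes.

import numpy as np

def q_learning_avg(P, R, gamma, alpha, num_steps, rng):
    """Asynchronous Q-learning with constant stepsize alpha;
    returns the running average of the iterates."""
    n_states, n_actions = R.shape
    Q = np.zeros((n_states, n_actions))
    Q_avg = np.zeros_like(Q)
    s = rng.integers(n_states)
    for t in range(num_steps):
        a = rng.integers(n_actions)                  # uniform exploration (assumed behavior policy)
        s_next = rng.choice(n_states, p=P[s, a])     # sample next state from the MDP
        td_target = R[s, a] + gamma * Q[s_next].max()
        Q[s, a] += alpha * (td_target - Q[s, a])     # update a single (s, a) entry
        Q_avg += (Q - Q_avg) / (t + 1)               # average of the iterates
        s = s_next
    return Q_avg

def rr_extrapolate(P, R, gamma, alpha, num_steps, seed=0):
    """Richardson-Romberg extrapolation: combine averaged iterates
    from stepsizes alpha and 2*alpha to cancel the O(alpha) bias."""
    rng = np.random.default_rng(seed)
    Q_alpha = q_learning_avg(P, R, gamma, alpha, num_steps, rng)
    Q_2alpha = q_learning_avg(P, R, gamma, 2 * alpha, num_steps, rng)
    return 2 * Q_alpha - Q_2alpha

For example, with a small random MDP one would call rr_extrapolate(P, R, gamma=0.9, alpha=0.05, num_steps=200000) and compare the result against the averaged iterate for a single stepsize; the paper's numerical results indicate the extrapolated estimate is closer to the optimal Q-function.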


Citation Information:

Yixuan Zhang and Qiaomin Xie. "Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation." Reinforcement Learning Journal, vol. 3, 2024, pp. 1168–1210.

BibTeX:

@article{zhang2024constant,
    title={Constant Stepsize {Q-learning}: {D}istributional Convergence, Bias and Extrapolation},
    author={Zhang, Yixuan and Xie, Qiaomin},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1168--1210},
    year={2024}
}