A Tighter Convergence Proof of Reverse Experience Replay

By Nan Jiang, Jinzhao Li, and Yexiang Xue

Reinforcement Learning Journal, vol. 1, no. 1, 2024, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

In reinforcement learning, Reverse Experience Replay (RER) is a recently proposed algorithm that attains better sample complexity than the classic experience replay method. RER requires the learning algorithm to update parameters through consecutive state-action-reward tuples in reverse order. However, the most recent theoretical analysis only holds for a minimal learning rate and short consecutive steps, and such configurations converge more slowly than large-learning-rate algorithms without RER. In view of this theoretical and empirical gap, we provide a tighter analysis that mitigates the limitations on the learning rate and the length of consecutive steps. Furthermore, we show theoretically that RER converges with a larger learning rate and a longer sequence.
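The sketch below illustrates the mechanism described in the abstract: sample a block of consecutive transitions and apply the value updates in reverse order, so later targets propagate backwards within the block. This is a minimal tabular Q-learning illustration, not the authors' implementation; the function name, buffer layout, and hyperparameters are assumptions made for exposition.

```python
# Minimal illustrative sketch of Reverse Experience Replay (RER) with tabular
# Q-learning. Hypothetical names/parameters; not the paper's implementation.
import random
import numpy as np

def rer_update(Q, buffer, L=8, alpha=0.1, gamma=0.99):
    """One RER pass: pick a random block of L consecutive transitions from
    `buffer` (a list of (s, a, r, s_next) tuples) and update Q in reverse order."""
    if len(buffer) < L:
        return Q
    start = random.randint(0, len(buffer) - L)
    block = buffer[start:start + L]
    # Reverse traversal: updating the latest transition first lets its new
    # value estimate flow back to the earlier states in the same block.
    for (s, a, r, s_next) in reversed(block):
        td_target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```

Here `Q` would be a NumPy array of shape (num_states, num_actions); the learning rate `alpha` and block length `L` correspond to the quantities whose admissible ranges the paper's tighter analysis enlarges.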


Citation Information:

Nan Jiang, Jinzhao Li, and Yexiang Xue. "A Tighter Convergence Proof of Reverse Experience Replay." Reinforcement Learning Journal, vol. 1, no. 1, 2024, pp. TBD.

BibTeX:

Note: Manually check this automatically generated text (particularly capitalization in the title and first-last splits of names).

@article{jiang2024tighter,
    title={A Tighter Convergence Proof of Reverse Experience Replay},
    author={Jiang, Nan and Li, Jinzhao and Xue, Yexiang},
    journal={Reinforcement Learning Journal},
    volume={1},
    number={1},
    year={2024}
}