Weight Clipping for Deep Continual and Reinforcement Learning

By Mohamed Elsayed, Qingfeng Lan, Clare Lyle, and A. Rupam Mahmood

Reinforcement Learning Journal, vol. 5, 2024, pp. 2198–2217.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Many failures in deep continual and reinforcement learning are associated with increasing magnitudes of the weights, making them hard to change and potentially causing overfitting. While many methods address these learning failures, they often change the optimizer or the architecture, a complexity that hinders widespread adoption in various systems. In this paper, we focus on learning failures associated with increasing weight norm, and we propose a simple technique that can be easily added on top of existing learning systems: clipping neural network weights to limit them to a specific range. We study the effectiveness of weight clipping in a series of supervised and reinforcement learning experiments. Our empirical results highlight the benefits of weight clipping for generalization, addressing loss of plasticity and policy collapse, and facilitating learning with a large replay ratio.
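The technique described in the abstract is simple enough to sketch. Below is a minimal illustration in PyTorch of projecting all weights back into a fixed range after each optimizer step; the helper name clip_weights and the symmetric range [-clip_value, clip_value] are assumptions for illustration, and the paper's exact formulation (e.g., how the clipping range is chosen per layer) may differ.

    import torch

    def clip_weights(model: torch.nn.Module, clip_value: float) -> None:
        # Project every parameter back into [-clip_value, clip_value] in place.
        with torch.no_grad():
            for param in model.parameters():
                param.clamp_(-clip_value, clip_value)

    # Typical usage: apply the projection after each optimizer step.
    # loss.backward()
    # optimizer.step()
    # clip_weights(model, clip_value=2.0)

Because the projection runs outside the optimizer and touches no architectural components, it can be layered onto an existing training loop without changing the optimizer or the network, which is the ease-of-adoption point the abstract emphasizes.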


Citation Information:

Mohamed Elsayed, Qingfeng Lan, Clare Lyle, and A. Rupam Mahmood. "Weight Clipping for Deep Continual and Reinforcement Learning." Reinforcement Learning Journal, vol. 5, 2024, pp. 2198–2217.

BibTeX:

@article{elsayed2024weight,
    title={Weight Clipping for Deep Continual and Reinforcement Learning},
    author={Elsayed, Mohamed and Lan, Qingfeng and Lyle, Clare and Mahmood, A. Rupam},
    journal={Reinforcement Learning Journal},
    volume={5},
    pages={2198--2217},
    year={2024}
}