Mixture of Experts in a Mixture of RL settings

By Timon Willi, Johan Samir Obando Ceron, Jakob Nicolaus Foerster, Gintare Karolina Dziugaite, and Pablo Samuel Castro

Reinforcement Learning Journal, vol. 3, 2024, pp. 1072–1105.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.



Abstract:

Mixtures of Experts (MoEs) have gained prominence in (self-)supervised learning due to their enhanced inference efficiency, adaptability to distributed training, and modularity. Previous research has illustrated that MoEs can significantly boost Deep Reinforcement Learning (DRL) performance by expanding the network's parameter count while reducing dormant neurons, thereby enhancing the model's learning capacity and ability to deal with non-stationarity. In this work, we shed more light on MoEs' ability to deal with non-stationarity and investigate MoEs in DRL settings with "amplified" non-stationarity via multi-task training, providing further evidence that MoEs improve learning capacity. In contrast to previous work, our multi-task results allow us to better understand the underlying causes for the beneficial effect of MoE in DRL training, the impact of the various MoE components, and insights into how best to incorporate them in actor-critic-based DRL networks. Finally, we confirm results from previous work.
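
For readers who want a concrete picture of the mechanism the abstract refers to, the sketch below shows a minimal top-k gated Mixture-of-Experts layer of the kind that can stand in for a dense layer in a DRL value or policy network. This is an illustrative sketch in plain Python/NumPy, not the paper's implementation: the function name moe_layer, the shapes, the top-1 routing, and the ReLU experts are all assumptions made for the example.

import numpy as np

def moe_layer(x, expert_weights, gate_weights, top_k=1):
    """Minimal top-k gated MoE layer (illustrative sketch).

    x:              (d_in,) features from the network's encoder
    expert_weights: list of (d_in, d_out) matrices, one per expert
    gate_weights:   (d_in, n_experts) router parameters
    """
    assert gate_weights.shape[1] == len(expert_weights)

    # Router scores each expert; softmax turns scores into gate values.
    logits = x @ gate_weights
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()

    # Keep only the top-k experts and renormalise their gate values.
    top = np.argsort(gate)[-top_k:]
    total = gate[top].sum()

    # Weighted sum of the selected experts' outputs (ReLU experts here).
    out = np.zeros(expert_weights[0].shape[1])
    for i in top:
        out += (gate[i] / total) * np.maximum(x @ expert_weights[i], 0.0)
    return out

# Toy usage: four experts replacing a single 32 -> 64 dense layer.
rng = np.random.default_rng(0)
x = rng.normal(size=32)
experts = [rng.normal(size=(32, 64)) * 0.1 for _ in range(4)]
gate_w = rng.normal(size=(32, 4)) * 0.1
features = moe_layer(x, experts, gate_w, top_k=1)
print(features.shape)  # (64,)

With top_k=1 only one expert is active per input, so the layer adds parameters without a proportional increase in per-step compute; how such layers are placed and tokenized inside actor-critic networks is the subject of the paper itself.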


Citation Information:

Timon Willi, Johan Samir Obando Ceron, Jakob Nicolaus Foerster, Gintare Karolina Dziugaite, and Pablo Samuel Castro. "Mixture of Experts in a Mixture of RL settings." Reinforcement Learning Journal, vol. 3, 2024, pp. 1072–1105.

BibTeX:

@article{willi2024mixture,
    title={Mixture of Experts in a Mixture of {RL} settings},
    author={Willi, Timon and Ceron, Johan Samir Obando and Foerster, Jakob Nicolaus and Dziugaite, Gintare Karolina and Castro, Pablo Samuel},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1072--1105},
    year={2024}
}