Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.
Representation learning and unsupervised skill discovery remain key challenges for training reinforcement learning agents. We show that the empowerment objective, which measures the maximum number of distinct skills an agent can execute from a given representation, enables agents to perform representation learning and unsupervised skill discovery simultaneously. We provide theoretical analysis showing that empowerment can help agents learn sufficient-statistic representations of observations, because the maximum number of distinct skills an agent can execute from a learned representation grows when that representation does not conflate observations associated with different sufficient statistics. To jointly learn representations and skills, we use a variational lower bound on mutual information that is tighter than those of prior work, and we maximize this objective using a new actor-critic architecture. Empirically, we demonstrate that our approach can (i) learn significantly more skills than existing unsupervised skill discovery approaches and (ii) learn a representation suitable for downstream reinforcement learning applications.
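The abstract builds on maximizing a variational lower bound on the mutual information between skills and the states they reach. The sketch below is not the paper's (tighter) bound or architecture; it only illustrates the standard Barber–Agakov bound, I(Z; S) >= E[log q(z|s)] + H(Z), from the prior skill-discovery work the abstract contrasts against. The toy environment, discriminator, and all names here are illustrative assumptions.

```python
import numpy as np

# Toy setup (assumed, not from the paper): 3 discrete skills z with a
# uniform prior, and skill-conditioned outcome states s ~ p(s|z).
rng = np.random.default_rng(0)
n_skills = 3
p_z = np.full(n_skills, 1.0 / n_skills)

z = rng.integers(0, n_skills, size=5000)
s = z + 0.1 * rng.normal(size=z.shape)  # skills reach well-separated states

# A simple variational discriminator q(z|s): softmax over squared
# distance to each skill's mean outcome (means 0, 1, 2 assumed known).
logits = -((s[:, None] - np.arange(n_skills)[None, :]) ** 2) / (2 * 0.1**2)
log_q = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Barber–Agakov lower bound: I(Z; S) >= E_{z,s}[log q(z|s)] + H(Z).
entropy_z = -np.sum(p_z * np.log(p_z))  # H(Z) = log 3 under a uniform prior
bound = log_q[np.arange(len(z)), z].mean() + entropy_z
print(f"lower bound on I(Z; S): {bound:.3f} nats (ceiling H(Z) = {entropy_z:.3f})")
```

Because the three skills reach nearly non-overlapping states, the discriminator identifies them almost perfectly and the bound approaches its ceiling H(Z) = log 3; when skills overlap, E[log q(z|s)] drops and so does the certified number of distinguishable skills.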
Andrew Levy, Alessandro G Allievi, and George Konidaris. "Representation Learning and Skill Discovery with Empowerment." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.
BibTeX:
@article{levy2025representation,
    title={Representation Learning and Skill Discovery with Empowerment},
    author={Levy, Andrew and Allievi, Alessandro G and Konidaris, George},
    journal={Reinforcement Learning Journal},
    year={2025}
}