Reinforcement Learning Journal, vol. 5, 2024, pp. 2264–2283.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
Many tasks in control, robotics, and planning can be specified using desired goal configurations for various entities in the environment. Learning goal-conditioned policies is a natural paradigm to solve such tasks. However, current approaches struggle to learn and generalize as task complexity increases, such as variations in the number of environment entities or in the composition of goals. In this work, we introduce a framework for modeling entity-based compositional structure in tasks, and create suitable policy designs that can leverage this structure. Our policies, which utilize architectures like Deep Sets and Self Attention, are flexible and can be trained end-to-end without requiring any action primitives. When trained using standard reinforcement and imitation learning methods on a suite of simulated robot manipulation tasks, we find that these architectures achieve significantly higher success rates with less data. We also find that these architectures enable broader and compositional generalization, producing policies that extrapolate to different numbers of entities than seen in training and stitch together (i.e., compose) learned skills in novel ways.
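For intuition, the sketch below shows one way an entity-based, goal-conditioned policy of the kind described in the abstract might be structured with a Deep Sets aggregation: each entity's (state, goal) pair is embedded by a shared encoder, the embeddings are summed so the policy is permutation-invariant and handles a variable number of entities, and the pooled feature is combined with the robot state to produce an action. This is a minimal illustrative sketch, not the authors' exact architecture; the class name, layer sizes, and input dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DeepSetsGoalPolicy(nn.Module):
    """Illustrative permutation-invariant goal-conditioned policy over a set of entities."""

    def __init__(self, entity_dim, goal_dim, robot_dim, action_dim, hidden=128):
        super().__init__()
        # Shared encoder applied independently to every (entity state, entity goal) pair.
        self.entity_encoder = nn.Sequential(
            nn.Linear(entity_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Maps the pooled entity feature plus robot proprioception to an action.
        self.action_head = nn.Sequential(
            nn.Linear(hidden + robot_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, entity_states, entity_goals, robot_state):
        # entity_states: (B, N, entity_dim), entity_goals: (B, N, goal_dim),
        # robot_state: (B, robot_dim); N (number of entities) may vary across episodes.
        per_entity = self.entity_encoder(torch.cat([entity_states, entity_goals], dim=-1))
        pooled = per_entity.sum(dim=1)  # order-invariant Deep Sets aggregation
        return self.action_head(torch.cat([pooled, robot_state], dim=-1))

# Hypothetical usage: 3 blocks, each with a 6-D state and a 3-D goal position.
policy = DeepSetsGoalPolicy(entity_dim=6, goal_dim=3, robot_dim=10, action_dim=4)
action = policy(torch.randn(2, 3, 6), torch.randn(2, 3, 3), torch.randn(2, 10))
```

A Self Attention variant would replace the sum pooling with attention layers over the entity embeddings; either choice keeps the policy independent of entity ordering and count, which is what supports the extrapolation and skill composition discussed above.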
Allan Zhou, Vikash Kumar, Chelsea Finn, and Aravind Rajeswaran. "Policy Architectures for Compositional Generalization in Control." Reinforcement Learning Journal, vol. 5, 2024, pp. 2264–2283.
BibTeX:
@article{zhou2024policy,
  title={Policy Architectures for Compositional Generalization in Control},
  author={Zhou, Allan and Kumar, Vikash and Finn, Chelsea and Rajeswaran, Aravind},
  journal={Reinforcement Learning Journal},
  volume={5},
  pages={2264--2283},
  year={2024}
}