Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning

By Nicholas E. Corrado, Yuxiao Qu, John U. Balis, Adam Labiosa, and Josiah P. Hanna

Reinforcement Learning Journal, vol. 1, 2024, pp. 198–215.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

In offline reinforcement learning (RL), an RL agent learns to solve a task using only a fixed dataset of previously collected data. While offline RL has been successful in learning real-world robot control policies, it typically requires large amounts of expert-quality data to learn effective policies that generalize to out-of-distribution states. Unfortunately, such data is often difficult and expensive to acquire in real-world tasks. Several recent works have leveraged data augmentation (DA) to inexpensively generate additional data, but most apply augmentations randomly and ultimately produce highly suboptimal augmented experience. In this work, we propose Guided Data Augmentation (GuDA), a human-guided DA framework that generates expert-quality augmented data. The key insight behind GuDA is that while it may be difficult to demonstrate the sequence of actions required to produce expert data, a user can often easily characterize when an augmented trajectory segment represents progress toward task completion. Thus, a user can restrict the space of possible augmentations to automatically reject suboptimal augmented data. To extract a policy from the augmented data GuDA produces, we use off-the-shelf offline reinforcement learning and behavior cloning algorithms. We evaluate GuDA on a physical robot soccer task as well as simulated D4RL navigation tasks, a simulated autonomous driving task, and a simulated soccer task. Empirically, GuDA enables learning given a small initial dataset of potentially suboptimal experience and outperforms a random DA strategy as well as a model-based DA strategy.
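
The paper itself specifies the augmentation functions and guidance used in each task. As a rough, illustrative sketch only (not the authors' implementation), the Python loop below shows one way the core idea could look: sample a random augmentation of a stored transition, then keep it only if a user-specified progress check accepts it. The names guided_augment, augment_fn, and is_progress are hypothetical placeholders, not names from the paper.

import numpy as np

def guided_augment(dataset, augment_fn, is_progress, n_augmented, seed=0):
    # dataset: list of (obs, action, reward, next_obs, done) transitions.
    # augment_fn(transition, rng): returns a randomly transformed copy of a
    #     transition (e.g., a translation/rotation of positions in the scene).
    # is_progress(transition): user-specified check that accepts an augmented
    #     transition only if it represents progress toward task completion.
    rng = np.random.default_rng(seed)
    augmented = []
    while len(augmented) < n_augmented:
        # Sample any stored transition, even from suboptimal trajectories.
        transition = dataset[rng.integers(len(dataset))]
        # Apply a randomly sampled transformation to the transition.
        candidate = augment_fn(transition, rng)
        # Guidance: discard augmented data that does not make progress.
        if is_progress(candidate):
            augmented.append(candidate)
    # An off-the-shelf offline RL or behavior cloning algorithm can then be
    # trained on the combined original and augmented data.
    return dataset + augmented

Per the abstract, GuDA restricts the space of sampled augmentations up front rather than filtering after the fact, but the sketch captures the same point: the user never demonstrates expert actions, they only judge whether augmented segments make progress.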


Citation Information:

Nicholas E. Corrado, Yuxiao Qu, John U. Balis, Adam Labiosa, and Josiah P. Hanna. "Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning." Reinforcement Learning Journal, vol. 1, 2024, pp. 198–215.

BibTeX:

@article{corrado2024guided,
    title={Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning},
    author={Corrado, Nicholas E. and Qu, Yuxiao and Balis, John U. and Labiosa, Adam and Hanna, Josiah P.},
    journal={Reinforcement Learning Journal},
    volume={1},
    pages={198--215},
    year={2024}
}