Learning Abstract World Models for Value-preserving Planning with Options

By Rafael Rodriguez-Sanchez and George Konidaris

Reinforcement Learning Journal, vol. 4, 2024, pp. 1733–1758.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

General-purpose agents require fine-grained controls and rich sensory inputs to perform a wide range of tasks. However, this complexity often leads to intractable decision-making. Traditionally, agents are provided with task-specific action and observation spaces to mitigate this challenge, but this reduces autonomy. Instead, agents must be capable of building state-action spaces at the correct abstraction level from their sensorimotor experiences. We leverage the structure of a given set of temporally extended actions to learn abstract Markov decision processes (MDPs) that operate at a higher level of temporal and state granularity. We characterize state abstractions necessary to ensure that planning with these skills, by simulating trajectories in the abstract MDP, results in policies with bounded value loss in the original MDP. We evaluate our approach in goal-based navigation environments that require continuous abstract states to plan successfully and show that abstract model learning improves the sample efficiency of planning and learning.
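The core idea in the abstract, planning by simulating option-level trajectories in a learned abstract MDP, can be illustrated with a toy sketch. This is not the paper's algorithm or API; all names (`AbstractMDP`, `plan_by_rollout`, the left/right options, the 1-D goal task) are hypothetical stand-ins, and the abstract transition model here is hand-coded and deterministic where a learned one would be estimated from data and typically stochastic.

```python
import random

class AbstractMDP:
    """Toy abstract model: abstract states are coarse 1-D cells; two
    temporally extended options ("left", "right") each move one cell.
    Reward 1.0 on reaching the goal cell."""
    def __init__(self, n_cells=5, goal=4):
        self.n_cells, self.goal = n_cells, goal
        self.options = ["left", "right"]

    def step(self, s, option):
        # Hand-coded abstract transition; a learned model would be fit
        # from sensorimotor trajectories and would generally be stochastic.
        s2 = max(0, s - 1) if option == "left" else min(self.n_cells - 1, s + 1)
        reward = 1.0 if s2 == self.goal else 0.0
        return s2, reward

def rollout_return(model, s, first_option, horizon=10, gamma=0.95, rng=None):
    """Simulate one trajectory in the abstract MDP, starting with a fixed
    first option and continuing with a random option policy."""
    rng = rng or random
    total, discount = 0.0, 1.0
    option = first_option
    for _ in range(horizon):
        s, r = model.step(s, option)
        total += discount * r
        discount *= gamma
        if s == model.goal:
            break
        option = rng.choice(model.options)
    return total

def plan_by_rollout(model, s, n_rollouts=200, seed=0):
    """Pick the first option whose simulated abstract trajectories
    accumulate the highest average discounted return."""
    rng = random.Random(seed)
    return max(model.options,
               key=lambda o: sum(rollout_return(model, s, o, rng=rng)
                                 for _ in range(n_rollouts)))
```

Because simulation happens entirely in the (small) abstract state space, each rollout is cheap; the paper's contribution is characterizing which state abstractions make such simulated plans provably near-optimal in the original MDP, a guarantee this toy model does not attempt to provide.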


Citation Information:

Rafael Rodriguez-Sanchez and George Konidaris. "Learning Abstract World Models for Value-preserving Planning with Options." Reinforcement Learning Journal, vol. 4, 2024, pp. 1733–1758.

BibTeX:

@article{rodriguez-sanchez2024learning,
    title={Learning Abstract World Models for Value-preserving Planning with Options},
    author={Rodriguez-Sanchez, Rafael and Konidaris, George},
    journal={Reinforcement Learning Journal},
    volume={4},
    pages={1733--1758},
    year={2024}
}