Make the Pertinent Salient: Task-Relevant Reconstruction for Visual Control with Distractions

By Kyungmin Kim, JB Lanier, and Roy Fox

Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.


Abstract:

Model-Based Reinforcement Learning (MBRL) has shown promise in visual control tasks due to its data efficiency. However, training MBRL agents to develop generalizable perception remains challenging, especially in the presence of visual distractions that introduce noise into representation learning. We introduce Segmentation Dreamer (SD), a framework that facilitates representation learning in MBRL through a novel auxiliary task. Assuming that the task-relevant components of an image can be identified using prior knowledge about the task, SD applies segmentation masks to image observations and reconstructs only the task-relevant regions, reducing representation complexity. SD can leverage either ground-truth masks available in simulation or potentially imperfect segmentation foundation models. In the latter case, the image reconstruction loss is applied selectively to mitigate misleading learning signals from mask prediction errors. In modified DeepMind Control Suite and Meta-World tasks with added visual distractions, SD achieves significantly better sample efficiency and higher final performance than prior work, and is especially effective in sparse-reward tasks that prior work could not solve. We also validate its effectiveness in a real-world robotic lane-following task, training with intentional distractions for zero-shot transfer.
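
For intuition, the auxiliary task described above can be pictured as a reconstruction loss gated by a segmentation mask. The sketch below is an illustrative PyTorch rendering of that idea, not the paper's implementation; the function names, the per-mask confidence score, and the gating threshold are assumptions made for illustration.

    import torch

    def masked_reconstruction_loss(pred, target, mask, eps=1e-8):
        """Mean squared error restricted to task-relevant pixels.

        pred, target: (B, C, H, W) decoded and observed images.
        mask:         (B, 1, H, W) binary mask, 1 = task-relevant region.
        Distractor pixels (mask == 0) contribute no gradient.
        """
        se = (pred - target) ** 2 * mask              # zero out distractor pixels
        return se.sum() / (mask.sum() * pred.shape[1] + eps)

    def selective_reconstruction_loss(pred, target, mask, mask_conf, thresh=0.9):
        """Hypothetical per-sample gating for imperfect predicted masks:
        samples whose mask confidence falls below `thresh` are dropped
        from the loss, so segmentation errors do not supply misleading
        reconstruction targets.

        mask_conf: (B,) assumed scalar confidence per predicted mask.
        """
        keep = (mask_conf > thresh).float().view(-1, 1, 1, 1)
        return masked_reconstruction_loss(pred, target, mask * keep)

Under this gating, a sample with an untrusted mask simply contributes nothing to the reconstruction term, rather than pulling the representation toward a wrong segmentation.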


Citation Information:

Kyungmin Kim, JB Lanier, and Roy Fox. "Make the Pertinent Salient: Task-Relevant Reconstruction for Visual Control with Distractions." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

BibTeX:
@article{kim2025make,
    title={Make the Pertinent Salient: {T}ask-Relevant Reconstruction for Visual Control with Distractions},
    author={Kim, Kyungmin and Lanier, JB and Fox, Roy},
    journal={Reinforcement Learning Journal},
    year={2025}
}