Learning to Navigate in Mazes with Novel Layouts using Abstract Top-down Maps

By Linfeng Zhao and Lawson L.S. Wong

Reinforcement Learning Journal, vol. 5, 2024, pp. 2359–2372.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.

Abstract:

Learning navigation capabilities across different environments has long been a major challenge in decision-making. In this work, we focus on zero-shot navigation using given abstract 2-D top-down maps. Like a human navigating by reading a paper map, the agent reads the map as an image when navigating in a novel layout, after learning to navigate on a set of training maps. We propose a model-based reinforcement learning approach for this multi-task learning problem, which jointly learns a hypermodel that takes top-down maps as input and predicts the weights of the transition network. We use the DeepMind Lab environment and customize layouts using generated maps. Our method adapts better to novel environments zero-shot and is more robust to noise.
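To make the hypermodel idea concrete, below is a minimal sketch (not the authors' code) of a network that encodes an abstract top-down map image and emits the weights of a small transition network, which is then applied to a state-action pair. The module names, map resolution, and state/action dimensions are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MapHypermodel(nn.Module):
    """Encodes a top-down map and predicts weights of a two-layer transition MLP."""

    def __init__(self, state_dim=32, action_dim=4, hidden_dim=64):
        super().__init__()
        self.state_dim, self.action_dim, self.hidden_dim = state_dim, action_dim, hidden_dim
        # Convolutional encoder for the 2-D top-down map (assumed 1x32x32 input).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        enc_dim = 32 * 8 * 8
        # Parameter counts of the transition MLP: (state + one-hot action) -> hidden -> next state.
        in_dim = state_dim + action_dim
        self.n_w1, self.n_b1 = hidden_dim * in_dim, hidden_dim
        self.n_w2, self.n_b2 = state_dim * hidden_dim, state_dim
        self.head = nn.Linear(enc_dim, self.n_w1 + self.n_b1 + self.n_w2 + self.n_b2)

    def forward(self, map_img, state, action_onehot):
        """Predict the next state under the transition model induced by map_img."""
        theta = self.head(self.encoder(map_img))  # flat vector of predicted weights
        w1, b1, w2, b2 = torch.split(
            theta, [self.n_w1, self.n_b1, self.n_w2, self.n_b2], dim=-1)
        B = map_img.shape[0]
        w1 = w1.view(B, self.hidden_dim, self.state_dim + self.action_dim)
        w2 = w2.view(B, self.state_dim, self.hidden_dim)
        x = torch.cat([state, action_onehot], dim=-1)
        h = F.relu(torch.bmm(w1, x.unsqueeze(-1)).squeeze(-1) + b1)
        return torch.bmm(w2, h.unsqueeze(-1)).squeeze(-1) + b2


# Zero-shot use on a novel map: a single forward pass, no gradient update.
model = MapHypermodel()
maps = torch.rand(2, 1, 32, 32)                            # abstract top-down maps as images
states = torch.randn(2, 32)                                # latent agent states (assumed)
actions = F.one_hot(torch.tensor([0, 3]), num_classes=4).float()
print(model(maps, states, actions).shape)                  # torch.Size([2, 32])

In training, such a hypermodel would be optimized jointly with the rest of the model-based agent across the set of training maps, so that at test time a previously unseen map induces a usable transition model without further learning.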


Citation Information:

Linfeng Zhao and Lawson L.S. Wong. "Learning to Navigate in Mazes with Novel Layouts using Abstract Top-down Maps." Reinforcement Learning Journal, vol. 5, 2024, pp. 2359–2372.

BibTeX:

@article{zhao2024learning,
    title={Learning to Navigate in Mazes with Novel Layouts using Abstract Top-down Maps},
    author={Zhao, Linfeng and Wong, Lawson L.S.},
    journal={Reinforcement Learning Journal},
    volume={5},
    pages={2359--2372},
    year={2024}
}