Reinforcement Learning Journal, vol. 4, 2024, pp. 1567–1597.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
We investigate the impact of auxiliary learning tasks, such as observation reconstruction and latent self-prediction, on the representation learning problem in reinforcement learning, and study how they interact with distractions and observation functions in the MDP. We provide a theoretical analysis of the learning dynamics of observation reconstruction, latent self-prediction, and TD learning in the presence of distractions and observation functions under linear model assumptions. With this formalization, we are able to explain why latent self-prediction is a helpful auxiliary task, while observation reconstruction can provide more useful features when used in isolation. Our empirical analysis shows that the insights obtained from our learning dynamics framework predict the behavior of these loss functions beyond the linear model assumption, in non-linear neural networks. This reinforces the usefulness of the linear model framework not only for theoretical analysis, but also for its practical benefit in applied problems.
Claas A Voelcker, Tyler Kastner, Igor Gilitschenski, and Amir-massoud Farahmand. "When does Self-Prediction help? Understanding Auxiliary Tasks in Reinforcement Learning." Reinforcement Learning Journal, vol. 4, 2024, pp. 1567–1597.
BibTeX:
@article{voelcker2024when,
title={When does Self-Prediction help? Understanding Auxiliary Tasks in Reinforcement Learning},
author={Voelcker, Claas A and Kastner, Tyler and Gilitschenski, Igor and Farahmand, Amir-massoud},
journal={Reinforcement Learning Journal},
volume={4},
pages={1567--1597},
year={2024}
}