Reinforcement Learning Journal, vol. 5, 2024, pp. 2096–2106.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
Goals fundamentally shape how we experience the world. For example, when we are hungry, we tend to view objects in our environment according to whether they are edible (or tasty). Conversely, when we are cold, we view the very same objects according to their ability to produce heat. Computational theories of learning in cognitive systems, such as reinforcement learning, use state representations to describe how agents determine behaviorally relevant features of their environment. However, these approaches typically assume ground-truth state representations that are known to the agent, and reward functions that need to be learned. Here we suggest an alternative approach in which state representations are not assumed to be veridical, or even pre-defined, but rather emerge from the agent's goals through interaction with its environment. We illustrate this novel perspective using a rodent odor-guided choice task and discuss its potential role in developing a unified theory of experience-based learning in natural and artificial agents.
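To make the core idea concrete, here is a minimal, hypothetical Python sketch, not the model or task from the paper: the objects, their attributes, and the state_representation function are all invented for illustration. It shows how the very same observations can be collapsed into different state representations depending on the agent's current goal, echoing the hungry/cold example above.

# Minimal hypothetical sketch: goal-dependent state abstraction.
# All objects, attributes, and values below are invented for illustration;
# this is not the task or model used in the paper.

OBJECTS = {
    "apple": {"edible": True,  "burnable": False},
    "log":   {"edible": False, "burnable": True},
    "stone": {"edible": False, "burnable": False},
    "bread": {"edible": True,  "burnable": True},
}

def state_representation(observation: str, goal: str) -> str:
    """Collapse a raw observation into a goal-relevant state label."""
    attributes = OBJECTS[observation]
    if goal == "hungry":
        # A hungry agent partitions objects by edibility.
        return "food" if attributes["edible"] else "not-food"
    if goal == "cold":
        # A cold agent partitions the same objects by heat value.
        return "fuel" if attributes["burnable"] else "not-fuel"
    raise ValueError(f"unknown goal: {goal}")

# The same four observations induce two different state partitions:
for goal in ("hungry", "cold"):
    print(goal, {obj: state_representation(obj, goal) for obj in OBJECTS})

Under the "hungry" goal, apple and bread fall into one state and log and stone into another; under the "cold" goal the partition shifts, even though the observations themselves are unchanged.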
Nadav Amir, Yael Niv, and Angela J. Langdon. "States as goal-directed concepts: an epistemic approach to state-representation learning." Reinforcement Learning Journal, vol. 5, 2024, pp. 2096–2106.
BibTeX:
@article{amir2024states,
  title={States as goal-directed concepts: an epistemic approach to state-representation learning},
  author={Amir, Nadav and Niv, Yael and Langdon, Angela J.},
  journal={Reinforcement Learning Journal},
  volume={5},
  pages={2096--2106},
  year={2024}
}