Value Internalization: Learning and Generalizing from Social Reward

By Frieda Rong and Max Kleiman-Weiner

Reinforcement Learning Journal, vol. 3, 2024, pp. 1060–1071.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

Social rewards shape human behavior. During development, a caregiver guides a learner’s behavior towards culturally aligned goals and values. How do these behaviors persist and generalize when the caregiver is no longer present, and the learner must continue autonomously? Here, we propose a model of value internalization where social feedback trains an internal social reward (ISR) model that generates internal rewards when social rewards are unavailable. Through empirical simulations, we show that an ISR model prevents agents from unlearning socialized behaviors and enables generalization in out-of-distribution tasks. We characterize the implications of incomplete internalization, akin to "reward hacking" on the ISR. Additionally, we show that our model internalizes prosocial behavior in a multi-agent environment. Our work provides a foundation for understanding how humans acquire and generalize values and offers insights for aligning AI with human values.
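To make the abstract's core idea concrete, here is a minimal toy sketch in Python (not the paper's implementation; the environment, feature choices, and learning rules are all assumptions for illustration). During a socialization phase the agent fits a small internal social reward (ISR) model to the caregiver's feedback; once the caregiver is absent, it substitutes the ISR's predictions for the missing social reward, so the socialized behavior is not unlearned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: one-hot state-action features; the caregiver
# rewards one target action per state during a "socialization" phase.
N_STATES, N_ACTIONS = 8, 4
target_action = rng.integers(N_ACTIONS, size=N_STATES)  # caregiver-approved actions

def caregiver_reward(s, a):
    """Social reward: +1 if the action matches the caregiver's target, else 0."""
    return 1.0 if a == target_action[s] else 0.0

def features(s, a):
    """State-action feature vector consumed by the ISR model."""
    x = np.zeros(N_STATES * N_ACTIONS)
    x[s * N_ACTIONS + a] = 1.0
    return x

# ISR model: linear predictor of the caregiver's feedback, trained online.
w_isr = np.zeros(N_STATES * N_ACTIONS)

def isr(s, a):
    return features(s, a) @ w_isr

# Tabular Q-values for a simple bandit-style learner (no state transitions).
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, eps, lr_isr = 0.5, 0.2, 0.5

def step(s, socialization):
    global w_isr
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
    if socialization:
        r = caregiver_reward(s, a)
        # Fit the ISR model to the observed social reward.
        w_isr += lr_isr * (r - isr(s, a)) * features(s, a)
    else:
        # Caregiver absent: substitute the internalized reward.
        r = isr(s, a)
    Q[s, a] += alpha * (r - Q[s, a])

# Phase 1: socialization with the caregiver present.
for _ in range(2000):
    step(rng.integers(N_STATES), socialization=True)

# Phase 2: autonomous phase; rewards come only from the ISR model.
for _ in range(2000):
    step(rng.integers(N_STATES), socialization=False)

print("Fraction of states where the agent matches the caregiver's target:",
      np.mean(Q.argmax(axis=1) == target_action))
```

An imperfectly fit ISR model in this sketch would reward some non-target actions, which is the toy analogue of the incomplete internalization ("reward hacking" on the ISR) the abstract mentions.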


Citation Information:

Frieda Rong and Max Kleiman-Weiner. "Value Internalization: Learning and Generalizing from Social Reward." Reinforcement Learning Journal, vol. 3, 2024, pp. 1060–1071.

BibTeX:

@article{rong2024value,
    title={Value Internalization: {L}earning and Generalizing from Social Reward},
    author={Rong, Frieda and Kleiman-Weiner, Max},
    journal={Reinforcement Learning Journal},
    volume={3},
    pages={1060--1071},
    year={2024}
}