Reinforcement Learning Journal, vol. 5, 2024, pp. 2320–2344.
Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.
A central challenge for autonomous vehicles is coordinating with humans. Incorporating realistic human agents is therefore essential for scalable training and evaluation of autonomous driving systems in simulation. Simulation agents are typically developed by imitating large-scale, high-quality datasets of human driving. However, pure imitation learning agents empirically exhibit high collision rates when executed in multi-agent closed-loop settings. To build agents that are both realistic and effective in closed-loop settings, we propose Human-Regularized PPO (HR-PPO), a multi-agent algorithm in which agents are trained through self-play with a small penalty for deviating from a human reference policy. In contrast to prior work, our approach is RL-first and uses only 30 minutes of imperfect human demonstrations. We evaluate agents on a large set of multi-agent traffic scenes. Results show our HR-PPO agents are highly effective at achieving goals, with a success rate of 93%, an off-road rate of 3.5%, and a collision rate of 3%. At the same time, the agents drive in a human-like manner, as measured by their similarity to existing human driving logs. We also find that HR-PPO agents show considerable improvements on proxy measures of coordination with human driving, particularly in highly interactive scenarios. We open-source our code and trained agents at https://github.com/Emerge-Lab/nocturne_lab and share demonstrations of agent behaviors at https://sites.google.com/view/driving-partners.
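The regularization described in the abstract can be viewed as a standard KL-penalized RL objective. The following is a minimal sketch, assuming a behavior-cloned human reference policy $\pi_{\mathrm{BC}}$ and a penalty weight $\lambda$; the symbols, the direction of the KL term, and the weighting are illustrative, and the paper itself gives the exact loss used in training:

\[
\max_{\theta} \;\; \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\sum_{t} r(s_t, a_t)\right] \;-\; \lambda \, \mathbb{E}_{s}\!\left[ D_{\mathrm{KL}}\!\left( \pi_\theta(\cdot \mid s) \,\middle\|\, \pi_{\mathrm{BC}}(\cdot \mid s) \right) \right]
\]

Here $\pi_\theta$ is the self-play policy optimized with PPO and $\pi_{\mathrm{BC}}$ is the reference policy fit by imitation on the 30 minutes of human demonstrations. A small $\lambda$ keeps the trained agents close to human driving behavior while allowing RL to correct the closed-loop failure modes, such as high collision rates, that pure imitation exhibits.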
Daphne Cornelisse and Eugene Vinitsky. "Human-compatible driving agents through data-regularized self-play reinforcement learning." Reinforcement Learning Journal, vol. 5, 2024, pp. 2320–2344.
BibTeX:
@article{cornelisse2024human,
  title={Human-compatible driving agents through data-regularized self-play reinforcement learning},
  author={Cornelisse, Daphne and Vinitsky, Eugene},
  journal={Reinforcement Learning Journal},
  volume={5},
  pages={2320--2344},
  year={2024}
}