Seldonian Reinforcement Learning for Ad Hoc Teamwork

By Edoardo Zorzi, Alberto Castellini, Leonidas Bakopoulos, Georgios Chalkiadakis, and Alessandro Farinelli

Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.


Abstract:

Most offline RL algorithms return optimal policies but do not provide statistical guarantees on desirable behaviors. This can cause reliability issues in safety-critical applications, such as multiagent domains in which agents, and possibly humans, need to interact to reach their goals without harming each other. In this work, we propose a novel offline RL approach, inspired by Seldonian optimization, that returns policies with good performance and statistically guaranteed properties with respect to predefined desirable behaviors. In particular, we focus on Ad Hoc Teamwork settings, where agents must collaborate with new teammates without prior coordination. Our method requires only a pre-collected dataset, a set of candidate policies for our agent, and a specification of the possible policies followed by the other players---it does not require further interactions, training, or assumptions about the type or architecture of the policies. We test our algorithm on Ad Hoc Teamwork problems and show that it consistently finds reliable policies while improving sample efficiency with respect to standard ML baselines.
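
For intuition, the sketch below illustrates the general Seldonian candidate-selection / safety-test pattern the abstract refers to: candidates are scored on one split of the offline dataset via importance sampling, and a policy is returned only if a high-confidence lower bound on the constrained quantity clears a threshold on a held-out split. This is a minimal, hypothetical illustration, not the paper's actual algorithm; all function and variable names are invented for this example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)


def per_episode_is_returns(episodes, candidate, behaviour):
    # Ordinary importance-sampling estimate of the candidate's return, one value per episode.
    estimates = []
    for ep in episodes:
        weight, ret = 1.0, 0.0
        for s, a, r in ep:
            weight *= candidate[s][a] / behaviour[s][a]
            ret += r
        estimates.append(weight * ret)
    return np.asarray(estimates)


def t_lower_bound(samples, delta):
    # One-sided (1 - delta) lower confidence bound on the mean (Student's t).
    n = len(samples)
    return samples.mean() - samples.std(ddof=1) / np.sqrt(n) * stats.t.ppf(1 - delta, n - 1)


def seldonian_select(candidates, d_cand, d_safety, behaviour, threshold, delta):
    # Candidate selection: best estimated return among candidates predicted to pass the test.
    best, best_score = None, -np.inf
    for pi in candidates:
        est = per_episode_is_returns(d_cand, pi, behaviour)
        # Double the confidence-interval width to anticipate the held-out safety test.
        predicted_bound = est.mean() - 2.0 * (est.mean() - t_lower_bound(est, delta))
        if predicted_bound >= threshold and est.mean() > best_score:
            best, best_score = pi, est.mean()
    if best is None:
        return None  # "No Solution Found"
    # Safety test on held-out data: return the policy only with a high-confidence guarantee.
    safety_est = per_episode_is_returns(d_safety, best, behaviour)
    return best if t_lower_bound(safety_est, delta) >= threshold else None


if __name__ == "__main__":
    # Toy single-state bandit: two actions, uniform behaviour policy, action 1 is better.
    behaviour = {0: np.array([0.5, 0.5])}
    candidates = [{0: np.array([0.9, 0.1])}, {0: np.array([0.1, 0.9])}]
    episodes = []
    for _ in range(2000):
        a = int(rng.integers(2))
        r = rng.normal(loc=(0.2, 1.0)[a])
        episodes.append([(0, a, r)])
    d_cand, d_safety = episodes[:1000], episodes[1000:]
    chosen = seldonian_select(candidates, d_cand, d_safety, behaviour,
                              threshold=0.5, delta=0.05)
    print("accepted candidate:", chosen)

In this toy run the second candidate (which prefers the higher-reward action) passes both the candidate-selection check and the held-out safety test, while the first is rejected; with no passing candidate the procedure returns "No Solution Found" rather than an unverified policy.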


Citation Information:

Edoardo Zorzi, Alberto Castellini, Leonidas Bakopoulos, Georgios Chalkiadakis, and Alessandro Farinelli. "Seldonian Reinforcement Learning for Ad Hoc Teamwork." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

BibTeX:
@article{zorzi2025seldonian,
    title={Seldonian Reinforcement Learning for Ad Hoc Teamwork},
    author={Zorzi, Edoardo and Castellini, Alberto and Bakopoulos, Leonidas and Chalkiadakis, Georgios and Farinelli, Alessandro},
    journal={Reinforcement Learning Journal},
    year={2025}
}