Towards Large Language Models that Benefit for All: Benchmarking Group Fairness in Reward Models

By Kefan Song, Jin Yao, Runnan Jiang, Rohan Chandra, and Shangtong Zhang

Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

Presented at the Reinforcement Learning Conference (RLC), Edmonton, Alberta, Canada, August 5–9, 2025.


Abstract:

As Large Language Models (LLMs) become increasingly powerful and accessible to human users, ensuring fairness across diverse demographic groups, i.e., group fairness, is a critical ethical concern. However, current fairness and bias research in LLMs is limited in two respects. First, following traditional group fairness in machine learning classification, it requires that the non-sensitive attributes, in this case the questions in the user prompts, be the same across different groups. In many practical scenarios, however, different groups may prefer different questions, making this requirement impractical. Second, it evaluates group fairness only for the LLM's final output, without identifying the source of possible bias. Namely, the bias in an LLM's output can result from both pretraining and finetuning; for finetuning, the bias can result from both the RLHF procedure and the learned reward model. Arguably, evaluating the group fairness of each component in the LLM pipeline could help develop better methods to mitigate the possible bias. Recognizing these two limitations, this work benchmarks the group fairness of learned reward models. By using expert-written text from arXiv, we are able to benchmark the group fairness of reward models without requiring the same question in the user prompts across different demographic groups. Surprisingly, our results demonstrate that all the evaluated reward models (e.g., Nemotron-4-340B-Reward, ArmoRM-Llama3-8B-v0.1, and GRM-llama3-8B-sftreg) exhibit statistically significant group unfairness. We also observe that reward models that perform best on canonical performance metrics tend to demonstrate better group fairness.
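
The following is a minimal illustrative sketch of the kind of evaluation the abstract describes: scoring texts associated with different demographic groups using a reward model and testing whether the reward distributions differ significantly. It is not the paper's actual benchmark; the `score_fn` stub, the group labels, and the use of a one-way ANOVA are assumptions made for illustration, and any real use would plug in an actual reward model's scalar output and the paper's arXiv-derived data.

```python
# Hypothetical sketch (not the paper's method): check whether a reward
# model assigns systematically different scores to texts from different
# demographic groups.
from typing import Callable

import numpy as np
from scipy import stats


def group_fairness_test(
    texts_by_group: dict[str, list[str]],
    score_fn: Callable[[str], float],
) -> tuple[float, float]:
    """Score each group's texts with a reward model and run a one-way
    ANOVA across the per-group reward distributions. A small p-value
    indicates the mean reward differs across groups, i.e., a form of
    group unfairness."""
    scores_per_group = [
        np.array([score_fn(text) for text in texts])
        for texts in texts_by_group.values()
    ]
    f_stat, p_value = stats.f_oneway(*scores_per_group)
    return float(f_stat), float(p_value)


if __name__ == "__main__":
    # Placeholder scorer for demonstration only; replace with the scalar
    # output of a learned reward model (e.g., a sequence-classification
    # head over prompt/response pairs).
    rng = np.random.default_rng(0)
    fake_score = lambda text: len(text) * 0.01 + rng.normal(scale=0.1)

    groups = {
        "group_a": ["example passage one", "another sample passage"],
        "group_b": ["a differently worded passage", "yet another example text"],
    }
    f_stat, p = group_fairness_test(groups, fake_score)
    print(f"F = {f_stat:.3f}, p = {p:.3f}")
```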


Citation Information:

Kefan Song, Jin Yao, Runnan Jiang, Rohan Chandra, and Shangtong Zhang. "Towards Large Language Models that Benefit for All: Benchmarking Group Fairness in Reward Models." Reinforcement Learning Journal, vol. TBD, 2025, pp. TBD.

BibTeX:
@article{song2025towards,
    title={Towards Large Language Models that Benefit for All: {B}enchmarking Group Fairness in Reward Models},
    author={Song, Kefan and Yao, Jin and Jiang, Runnan and Chandra, Rohan and Zhang, Shangtong},
    journal={Reinforcement Learning Journal},
    year={2025}
}