Inception: Efficiently Computable Misinformation Attacks on Markov Games

By Jeremy McMahan, Young Wu, Yudong Chen, Jerry Zhu, and Qiaomin Xie

Reinforcement Learning Journal, vol. 5, 2024, pp. 2345–2358.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

We study security threats to Markov games due to information asymmetry and misinformation. We consider an attacker player who can spread misinformation about its reward function to influence the robust victim player's behavior. Given a fixed fake reward function, we derive the victim's policy under worst-case rationality and present polynomial-time algorithms to compute the attacker's optimal worst-case policy based on linear programming and backward induction. Then, we provide an efficient inception ("planting an idea in someone's mind") attack algorithm to find the optimal fake reward function within a restricted set of reward functions with dominant strategies. Importantly, our methods exploit the universal assumption of rationality to compute attacks efficiently. Thus, our work exposes a security vulnerability arising from standard game assumptions under misinformation.
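To illustrate the backward-induction step the abstract mentions, here is a minimal sketch (not the paper's implementation) for the special case where the fake reward gives the victim a dominant strategy, so the victim's play is pinned down and the attacker faces an ordinary finite-horizon planning problem. The tabular game, the `victim_action` table, and all array shapes are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def attacker_backward_induction(R, P, victim_action, H):
    """Backward induction for the attacker's optimal policy in a
    finite-horizon tabular Markov game, under the (assumed) setting
    where the fake reward makes victim_action[s] dominant, so the
    victim always plays it.

    R[s, a1, a2]  : attacker's true reward
    P[s, a1, a2, s'] : transition probabilities
    Returns (pi, V) with pi[h, s] the attacker's action and
    V[h, s] the attacker's value-to-go.
    """
    S = R.shape[0]
    V = np.zeros((H + 1, S))            # terminal value V[H] = 0
    pi = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        for s in range(S):
            a2 = victim_action[s]       # victim's dominant play
            # Q over attacker actions, given the victim's fixed response
            Q = R[s, :, a2] + P[s, :, a2, :] @ V[h + 1]
            pi[h, s] = int(np.argmax(Q))
            V[h, s] = Q[pi[h, s]]
    return pi, V

# Toy game: 2 states, 2 attacker actions, horizon 2; the victim's
# dominant action is 0 in both states (all numbers are made up).
R = np.zeros((2, 2, 2))
R[0, 0, 0] = 1.0                        # reward for staying in state 0
R[1, 1, 0] = 2.0                        # reward for action 1 in state 1
P = np.zeros((2, 2, 2, 2))
for a2 in range(2):
    P[0, 0, a2, 0] = 1.0                # state 0, action 0: stay
    P[0, 1, a2, 1] = 1.0                # state 0, action 1: move to 1
    P[1, :, a2, 1] = 1.0                # state 1 is absorbing

pi, V = attacker_backward_induction(R, P, victim_action=[0, 0], H=2)
```

In this toy instance the attacker's optimal value from state 1 is 4 (action 1 twice), showing how pinning down the victim's response reduces the attack computation to single-agent dynamic programming.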


Citation Information:

Jeremy McMahan, Young Wu, Yudong Chen, Jerry Zhu, and Qiaomin Xie. "Inception: Efficiently Computable Misinformation Attacks on Markov Games." Reinforcement Learning Journal, vol. 5, 2024, pp. 2345–2358.

BibTeX:

@article{mcmahan2024inception,
    title={Inception: {E}fficiently Computable Misinformation Attacks on {Markov} Games},
    author={McMahan, Jeremy and Wu, Young and Chen, Yudong and Zhu, Jerry and Xie, Qiaomin},
    journal={Reinforcement Learning Journal},
    volume={5},
    pages={2345--2358},
    year={2024}
}