Causal Contextual Bandits with Adaptive Context

By Rahul Madhavan, Aurghya Maiti, Gaurav Sinha, and Siddharth Barman

Reinforcement Learning Journal, vol. 5, 2024, pp. 2233–2263.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

We study a variant of causal contextual bandits in which the context is determined by an initial intervention chosen by the learner. At the beginning of each round, the learner selects an initial action, based on which the environment reveals a stochastic context. The learner then selects a final action and receives a reward. Given $T$ rounds of interaction with the environment, the learner's objective is to learn a policy (for selecting the initial and final actions) with maximum expected reward. In this paper, we study the specific setting where every action corresponds to intervening on a node in some known causal graph. We extend prior work from the deterministic-context setting to obtain simple regret minimization guarantees. This is achieved through an instance-dependent causal parameter, $\lambda$, which characterizes our upper bound. Furthermore, we prove that our simple regret bound is essentially tight for a large class of instances. A key feature of our work is the use of convex optimization to address the bandit exploration problem. We also conduct experiments to validate our theoretical results, and release our code at [github.com/adaptiveContextualCausalBandits/aCCB](https://github.com/adaptiveContextualCausalBandits/aCCB).
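The two-stage interaction protocol described above is easy to see in code. Below is a minimal, self-contained sketch, not the authors' implementation: the environment (a context distribution per initial action and Bernoulli rewards per context–action pair), the problem sizes, and the uniform-exploration baseline are all illustrative assumptions. The paper's actual exploration strategy allocates interventions via convex optimization rather than uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stage environment: an initial action induces a
# distribution over contexts; a (context, final action) pair yields a reward.
N_INITIAL, N_CONTEXTS, N_FINAL = 3, 4, 5
context_probs = rng.dirichlet(np.ones(N_CONTEXTS), size=N_INITIAL)  # P(context | initial action)
mean_reward = rng.uniform(size=(N_CONTEXTS, N_FINAL))               # E[reward | context, final action]

def step(initial_action, policy):
    """One round of the adaptive-context protocol."""
    context = rng.choice(N_CONTEXTS, p=context_probs[initial_action])  # environment reveals context
    final_action = policy(context)                                     # learner picks final action
    reward = rng.binomial(1, mean_reward[context, final_action])       # Bernoulli reward
    return context, final_action, reward

# Uniform-exploration baseline over T rounds: estimate mean rewards,
# then act greedily (the paper replaces this with an optimized allocation).
counts = np.zeros((N_CONTEXTS, N_FINAL))
sums = np.zeros((N_CONTEXTS, N_FINAL))
T = 10_000
for _ in range(T):
    a0 = rng.integers(N_INITIAL)
    c, a1, r = step(a0, policy=lambda ctx: rng.integers(N_FINAL))
    counts[c, a1] += 1
    sums[c, a1] += r

est = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
greedy_final = est.argmax(axis=1)                               # best final action per context
value_of_initial = (context_probs * est.max(axis=1)).sum(axis=1)
best_initial = value_of_initial.argmax()                        # initial action with max expected reward
print("initial action:", best_initial, "final actions per context:", greedy_final)
```

The returned policy pairs an initial intervention with a context-dependent final intervention, which is exactly the object whose expected reward the simple-regret guarantee controls.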


Citation Information:

Rahul Madhavan, Aurghya Maiti, Gaurav Sinha, and Siddharth Barman. "Causal Contextual Bandits with Adaptive Context." Reinforcement Learning Journal, vol. 5, 2024, pp. 2233–2263.

BibTeX:

@article{madhavan2024causal,
    title={Causal Contextual Bandits with Adaptive Context},
    author={Madhavan, Rahul and Maiti, Aurghya and Sinha, Gaurav and Barman, Siddharth},
    journal={Reinforcement Learning Journal},
    volume={5},
    pages={2233--2263},
    year={2024}
}