Improving Thompson Sampling via Information Relaxation for Budgeted Multi-armed Bandits

By Woojin Jeong and Seungki Min

Reinforcement Learning Journal, vol. 1, 2024, pp. 16–28.

Presented at the Reinforcement Learning Conference (RLC), Amherst, Massachusetts, August 9–12, 2024.


Abstract:

We consider a Bayesian budgeted multi-armed bandit problem, in which each arm consumes a different amount of resources when selected and there is a budget constraint on the total amount of resources that can be used. Budgeted Thompson Sampling (BTS) offers a very effective heuristic for this problem, but its arm-selection rule does not take into account the remaining budget information. We adopt the Information Relaxation Sampling framework, which generalizes Thompson Sampling for classical K-armed bandit problems, and propose a series of algorithms that are randomized like BTS but more carefully optimize their decisions with respect to the budget constraint. In a one-to-one correspondence with these algorithms, we also suggest a series of performance benchmarks that improve upon the conventional benchmark. Our theoretical analysis and simulation results show that our algorithms (and our benchmarks) make incremental improvements over BTS (respectively, the conventional benchmark) across various settings, including a real-world example.
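For context, below is a minimal sketch of the Budgeted Thompson Sampling (BTS) baseline discussed in the abstract, assuming Bernoulli rewards and Bernoulli costs with Beta posteriors: each round, the agent draws posterior samples for every arm's expected reward and expected cost and pulls the arm with the highest sampled reward-to-cost ratio, regardless of how much budget remains. The function name, the reward/cost model, and the priors are illustrative assumptions, not the paper's exact setup or the proposed information-relaxation algorithms.

```python
import numpy as np


def budgeted_thompson_sampling(true_reward_p, true_cost_p, budget, rng=None):
    """Run BTS on a Bernoulli-reward / Bernoulli-cost bandit until the budget runs out.

    true_reward_p, true_cost_p: per-arm success probabilities (unknown to the agent).
    budget: total resource budget; each pull consumes the realized (0/1) cost.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = len(true_reward_p)
    # Beta(1, 1) priors for both the reward and the cost of each arm.
    reward_a, reward_b = np.ones(k), np.ones(k)
    cost_a, cost_b = np.ones(k), np.ones(k)

    total_reward = 0.0
    remaining = budget
    while remaining > 0:
        # Posterior draws for expected reward and expected cost of each arm.
        theta_r = rng.beta(reward_a, reward_b)
        theta_c = rng.beta(cost_a, cost_b)
        # BTS rule: maximize the sampled reward-to-cost ratio.
        # Note that the remaining budget plays no role in this choice.
        arm = int(np.argmax(theta_r / np.maximum(theta_c, 1e-12)))

        # Observe the realized reward and cost of the pulled arm.
        r = rng.binomial(1, true_reward_p[arm])
        c = rng.binomial(1, true_cost_p[arm])
        if c > remaining:  # cannot afford this pull
            break
        remaining -= c
        total_reward += r

        # Conjugate Beta posterior updates.
        reward_a[arm] += r
        reward_b[arm] += 1 - r
        cost_a[arm] += c
        cost_b[arm] += 1 - c

    return total_reward


# Example: 3 arms and a budget of 100 resource units (hypothetical parameters).
print(budgeted_thompson_sampling([0.3, 0.5, 0.4], [0.6, 0.9, 0.5], budget=100))
```

The point the paper builds on is visible in the arm-selection line: the sampled ratio ignores `remaining`, so BTS behaves identically whether the budget is nearly full or nearly exhausted.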


Citation Information:

Woojin Jeong and Seungki Min. "Improving Thompson Sampling via Information Relaxation for Budgeted Multi-armed Bandits." Reinforcement Learning Journal, vol. 1, 2024, pp. 16–28.

BibTeX:

@article{jeong2024improving,
    title={Improving {Thompson} Sampling via Information Relaxation for Budgeted Multi-armed Bandits},
    author={Jeong, Woojin and Min, Seungki},
    journal={Reinforcement Learning Journal},
    volume={1},
    pages={16--28},
    year={2024}
}