Thompson Sampling with Information Relaxation Penalties

Publication Type:
Article
Authors:
Min, Seungki; Maglaras, Costis; Moallemi, Ciamac C.
Affiliations:
Korea Advanced Institute of Science & Technology (KAIST); Columbia University
Journal:
MANAGEMENT SCIENCE
ISSN/ISBN:
0025-1909
DOI:
10.1287/mnsc.2020.01396
Publication Date:
2025
Keywords:
dynamic programming: Bayesian; dynamic programming: Markov; dynamic programming: optimal control
Abstract:
We consider a finite-horizon multiarmed bandit (MAB) problem in a Bayesian setting, for which we propose an information relaxation sampling framework. With this framework, we define an intuitive family of control policies that include Thompson sampling (TS) and the Bayesian optimal policy as endpoints. Analogous to TS, which at each decision epoch pulls an arm that is best with respect to the randomly sampled parameters, our algorithms sample entire future reward realizations and take the corresponding best action. However, this is done in the presence of penalties that seek to compensate for the availability of future information. We develop several novel policies and performance bounds for MAB problems that vary in performance and computational complexity between the two endpoints. Our policies can be viewed as natural generalizations of TS that simultaneously incorporate knowledge of the time horizon and explicitly consider the exploration-exploitation trade-off. We prove associated structural results on performance bounds and suboptimality gaps. Numerical experiments suggest that this new class of policies performs well, in particular in settings where the finite time horizon introduces significant exploration-exploitation tension into the problem. Finally, inspired by the finite-horizon Gittins index, we propose an index policy that builds on our framework and that outperforms state-of-the-art algorithms in our numerical experiments.
Source URL:
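
As the abstract notes, standard Thompson sampling samples parameters at each decision epoch and pulls the arm that is best under that sample, whereas the paper's information-relaxation variants sample entire future reward realizations and subtract penalties before acting. Below is a minimal sketch of the TS endpoint alone, for a Bernoulli bandit; the function name, Beta(1,1) priors, and arm means are illustrative assumptions and the penalty step of the proposed framework is not reproduced.

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, horizon, rng=None):
    """Minimal Thompson sampling sketch for a Bernoulli multiarmed bandit.

    At each epoch, draw a mean for every arm from its Beta posterior and
    pull the arm with the largest sampled mean. (Illustrative only; the
    paper's information-relaxation policies additionally sample full
    future reward realizations and apply penalties, which is omitted here.)
    """
    if rng is None:
        rng = np.random.default_rng()
    n_arms = len(true_means)
    # Beta(1, 1) priors: alpha counts successes + 1, beta counts failures + 1.
    alpha = np.ones(n_arms)
    beta = np.ones(n_arms)
    total_reward = 0.0
    for _ in range(horizon):
        sampled_means = rng.beta(alpha, beta)      # one posterior sample per arm
        arm = int(np.argmax(sampled_means))        # act greedily on the sample
        reward = rng.binomial(1, true_means[arm])  # observe a Bernoulli reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

# Example: 3 arms with hypothetical means, horizon T = 500.
print(thompson_sampling_bernoulli([0.3, 0.5, 0.7], horizon=500))
```

In the paper's framework, this parameter-sampling step is the computationally cheapest endpoint; richer policies replace it with sampled future reward trajectories plus penalty terms, trading extra computation for performance closer to the Bayesian optimal policy.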