Improvements and Generalizations of Stochastic Knapsack and Markovian Bandits Approximation Algorithms

Publication Type:
Article
Author(s):
Ma, Will
Affiliation(s):
Massachusetts Institute of Technology (MIT)
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN/ISBN:
0364-765X
DOI:
10.1287/moor.2017.0884
Publication Date:
2018
Pages:
789-812
Keywords:
adaptivity; benefit; LP
Abstract:
We study the multi-armed bandit problem whose arms are Markov chains that accrue rewards. In the finite-horizon setting, the celebrated Gittins indices do not apply, and computing the exact optimal policy is intractable. We provide approximation algorithms for the general model of Markov decision processes with nonunit transition times. When preemption is allowed, we give a (1/2 − ε)-approximation, along with an example showing that this guarantee is tight. When preemption is not allowed, we give a 1/12-approximation, which improves to a 4/27-approximation when transition times are unity. Our model captures the Markovian Bandits model of Gupta et al., the Stochastic Knapsack model of Dean et al., and the Budgeted Learning model of Guha and Munagala, and our algorithms improve upon existing results in all three areas. In our analysis, we encounter and overcome what is, to our knowledge, a new obstacle: an algorithm that provably exists via analytical arguments, but cannot be found in polynomial time.
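For reference, the approximation ratios quoted in the abstract follow the standard convention for maximization problems. A minimal statement of the guarantee, where ALG denotes the expected reward of the returned policy and OPT that of the optimal adaptive policy (this notation is ours, not taken from the source):

\[
\mathrm{ALG} \;\ge\; \alpha \cdot \mathrm{OPT},
\qquad
\alpha \in \Bigl\{ \tfrac{1}{2} - \epsilon,\; \tfrac{1}{12},\; \tfrac{4}{27} \Bigr\},
\]

so the non-preemptive unit-transition-time guarantee of 4/27 ≈ 0.148 is indeed stronger than the general non-preemptive guarantee of 1/12 ≈ 0.083, as the abstract's "improves to" indicates.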
Source URL: