Linearly Parameterized Bandits

Result Type:
Article
Authors:
Rusmevichientong, Paat; Tsitsiklis, John N.
Affiliations:
Cornell University; Massachusetts Institute of Technology (MIT)
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN:
0364-765X
DOI:
10.1287/moor.1100.0446
Publication Date:
2010
Pages:
395-411
Keywords:
allocation
Abstract:
We consider bandit problems involving a large (possibly infinite) collection of arms, in which the expected reward of each arm is a linear function of an r-dimensional random vector Z ∈ ℝ^r, where r ≥ 2. The objective is to minimize the cumulative regret and Bayes risk. When the set of arms corresponds to the unit sphere, we prove that the regret and Bayes risk are of order Θ(r√T), by establishing a lower bound for an arbitrary policy and showing that a matching upper bound is obtained through a policy that alternates between exploration and exploitation phases. The phase-based policy is also shown to be effective if the set of arms satisfies a strong convexity condition. For the case of a general set of arms, we describe a near-optimal policy whose regret and Bayes risk admit upper bounds of the form O(r√T log^{3/2} T).
Source URL:
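
To make the abstract's phase-based idea concrete, the following is a minimal Python sketch (not taken from the paper) of a policy that alternates exploration and exploitation phases for a linear bandit on the unit sphere in ℝ^r. The basis-vector exploration, the growing phase lengths, and the function name `phase_based_policy` are illustrative assumptions of this sketch, not the authors' construction or their phase schedule.

```python
import numpy as np

def phase_based_policy(Z, T, r, noise_std=1.0, seed=0):
    """Illustrative phase-based policy for a linear bandit on the unit
    sphere in R^r, where arm u earns reward u.Z + Gaussian noise.

    Alternates short exploration phases (pull the standard basis vectors
    to estimate Z) with longer exploitation phases (pull the greedy arm
    Z_hat / ||Z_hat||).  Phase lengths are a simple growing schedule
    chosen for illustration; the paper's schedule differs.
    Returns the cumulative pseudo-regret (sum of expected-reward gaps).
    """
    rng = np.random.default_rng(seed)
    sums = np.zeros(r)        # running reward sums per basis arm
    counts = np.zeros(r)      # pull counts per basis arm
    best = np.linalg.norm(Z)  # optimal expected reward: arm Z / ||Z||
    regret, t, phase = 0.0, 0, 1
    while t < T:
        # Exploration phase: one pull of each basis vector e_1, ..., e_r.
        for i in range(r):
            if t >= T:
                break
            reward = Z[i] + noise_std * rng.standard_normal()
            sums[i] += reward
            counts[i] += 1
            regret += best - Z[i]
            t += 1
        # Exploitation phase: play the greedy arm for a growing number
        # of steps, using the current least-squares-style estimate of Z.
        Z_hat = sums / np.maximum(counts, 1)
        u = Z_hat / max(np.linalg.norm(Z_hat), 1e-12)
        for _ in range(phase * r):  # illustrative growing phase length
            if t >= T:
                break
            regret += best - float(u @ Z)
            t += 1
        phase += 1
    return regret

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Z = rng.standard_normal(5)
    print(phase_based_policy(Z, T=10_000, r=5))
```

As the exploitation phases lengthen, an increasing fraction of pulls goes to the estimated-best arm while the estimate of Z keeps improving, which is the mechanism behind the √T-type regret scaling discussed in the abstract.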