Contextual Bandits with Cross-Learning

Publication Type:
Article
Authors:
Balseiro, Santiago; Golrezaei, Negin; Mahdian, Mohammad; Mirrokni, Vahab; Schneider, Jon
Affiliations:
Columbia University; Massachusetts Institute of Technology (MIT); Alphabet Inc.; Google Incorporated
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN:
0364-765X
DOI:
10.1287/moor.2022.1313
Publication Date:
2023
Pages:
1607-1629
Keywords:
Abstract:
In the classic contextual bandits problem, in each round t a learner observes some context c, chooses some action i to perform, and receives some reward r_{i,t}(c). We consider the variant of this problem in which, in addition to receiving the reward r_{i,t}(c), the learner also learns the values of r_{i,t}(c') for some other contexts c' in a set O_i(c), that is, the rewards that would have been achieved by performing that action under the different contexts c' ∈ O_i(c). This variant arises in several strategic settings, such as learning how to bid in nontruthful repeated auctions, which has gained a lot of attention lately as many platforms have switched to running first-price auctions. We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classic contextual bandits problem achieve O(√(CKT)) regret against all stationary policies, where C is the number of contexts, K the number of actions, and T the number of rounds. We design and analyze new algorithms for the contextual bandits problem with cross-learning and show that their regret has better dependence on the number of contexts. Under complete cross-learning, in which the rewards for all contexts are learned when choosing an action, that is, the set O_i(c) contains all contexts, we show that our algorithms achieve regret O(√(KT)), removing the dependence on C. In all other cases, that is, under partial cross-learning in which |O_i(c)| …
Source URL:
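The complete cross-learning setting described in the abstract can be sketched with an EXP3-style learner that, whenever an action is played, uses the cross-learned rewards to update its weights for every context, not just the realized one. The sketch below is illustrative only, under assumed details (uniformly arriving contexts, a simple importance-weighted estimator, hypothetical function and parameter names); it is not the paper's exact algorithm or analysis.

```python
import math
import random

def exp3_complete_cross_learning(K, C, T, reward, eta=0.05, gamma=0.1):
    """Illustrative EXP3-style sketch of complete cross-learning.

    When action i is played under the realized context c, the rewards
    r_i(c') are revealed for *all* contexts c', so every context's
    weight vector is updated each round -- the intuition behind the
    abstract's O(sqrt(KT)) bound with no dependence on C.
    """
    w = [[1.0] * K for _ in range(C)]  # one weight vector per context
    total = 0.0
    for _ in range(T):
        c = random.randrange(C)  # assumption: contexts arrive uniformly
        s = sum(w[c])
        # mix in uniform exploration so probabilities stay bounded below
        probs = [(1 - gamma) * w[c][i] / s + gamma / K for i in range(K)]
        i = random.choices(range(K), weights=probs)[0]
        total += reward(i, c)
        # cross-learning step: the chosen action's reward is observed
        # under every context; importance-weight by the probability under
        # the realized context (an illustrative estimator choice)
        for cp in range(C):
            est = reward(i, cp) / probs[i]
            w[cp][i] *= math.exp(eta * est)
            m = max(w[cp])  # renormalize for numerical stability
            w[cp] = [x / m for x in w[cp]]
    return total, w
```

With rewards that favor one action uniformly across contexts, the weight vectors of all contexts concentrate on that action, even for contexts that rarely choose it themselves, since cross-learning feeds them observations on every round.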