Bayesian Incentive-Compatible Bandit Exploration

Publication Type:
Article
Authors:
Mansour, Yishay; Slivkins, Aleksandrs; Syrgkanis, Vasilis
Affiliations:
Tel Aviv University; Microsoft; Microsoft
Journal:
OPERATIONS RESEARCH
ISSN:
0030-364X
DOI:
10.1287/opre.2019.1949
Publication Date:
2020
Pages:
1132-1161
Keywords:
Mechanism design; multiarmed bandits; regret; Bayesian incentive-compatibility
Abstract:
As self-interested individuals (agents) make decisions over time, they utilize information revealed by other agents in the past and produce information that may help agents in the future. This phenomenon is common in a wide range of scenarios in the Internet economy, as well as in medical decisions. Each agent would like to "exploit" (select the best action given the current information) but would prefer the previous agents to "explore" (try out various alternatives to collect information). A social planner, by means of a carefully designed recommendation policy, can incentivize the agents to balance exploration and exploitation so as to maximize social welfare. We model the planner's recommendation policy as a multiarmed bandit algorithm under incentive-compatibility constraints induced by agents' Bayesian priors. We design a bandit algorithm that is incentive-compatible and has asymptotically optimal performance, as expressed by regret. Further, we provide a black-box reduction from an arbitrary multiarmed bandit algorithm to an incentive-compatible one, with only a constant multiplicative increase in regret. This reduction works for very general bandit settings that incorporate contexts and arbitrary partial feedback.
Source URL:
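To make the abstract's black-box reduction concrete, below is a minimal Python sketch of the hide-the-exploration idea it describes: time is split into phases, one uniformly random round per phase defers to an arbitrary base bandit algorithm, and every other round recommends the currently best-looking ("exploit") arm. This is not the authors' implementation; all names (HiddenExplorationWrapper, GreedyBase, phase_len) are illustrative, and the paper derives the phase length from the agents' Bayesian prior rather than leaving it as a free parameter.

```python
import random


class GreedyBase:
    """Toy stand-in for an arbitrary base bandit algorithm. The
    reduction is black-box, so anything exposing select_arm/update
    could be plugged in here."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.sums = [0.0] * n_arms

    def select_arm(self):
        for arm, c in enumerate(self.counts):
            if c == 0:                     # sample each arm at least once
                return arm
        return max(range(len(self.counts)),
                   key=lambda a: self.sums[a] / self.counts[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward


class HiddenExplorationWrapper:
    """Each phase has phase_len rounds; exactly one, chosen uniformly
    at random, defers to the base algorithm (exploration), while the
    rest recommend the exploit arm. Agents see only the recommendation,
    never which kind of round it was. phase_len is a free illustrative
    parameter here; the paper chooses it from the agents' prior to
    guarantee incentive compatibility."""

    def __init__(self, base, phase_len):
        self.base = base
        self.phase_len = phase_len
        self._start_phase()

    def _start_phase(self):
        self.pos = 0
        self.explore_round = random.randrange(self.phase_len)

    def recommend(self, exploit_arm):
        if self.pos == self.explore_round:
            self.arm, self.exploring = self.base.select_arm(), True
        else:
            self.arm, self.exploring = exploit_arm, False
        return self.arm

    def observe(self, reward):
        if self.exploring:                 # only exploration rounds feed the base
            self.base.update(self.arm, reward)
        self.pos += 1
        if self.pos == self.phase_len:
            self._start_phase()


if __name__ == "__main__":
    random.seed(0)
    true_means = [0.4, 0.6]                # hypothetical Bernoulli arms
    planner = HiddenExplorationWrapper(GreedyBase(2), phase_len=10)
    counts, sums = [1, 1], [0.5, 0.5]      # crude posterior-mean stand-in
    for _ in range(500):
        exploit_arm = max((0, 1), key=lambda a: sums[a] / counts[a])
        arm = planner.recommend(exploit_arm)
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        planner.observe(reward)
        counts[arm] += 1
        sums[arm] += reward
    print("empirical means:", [round(sums[a] / counts[a], 3) for a in (0, 1)])
```

The point of the randomized phase position is that an agent who sees only the recommendation cannot tell whether it came from an exploration or an exploitation round; with a long enough phase, following the recommendation remains a Bayesian best response under the agents' prior, which is what makes the recommendation policy incentive-compatible.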