The empirical Bayes envelope and regret minimization in competitive Markov decision processes
Publication Type:
Article
Authors:
Mannor, S; Shimkin, N
Affiliations:
Massachusetts Institute of Technology (MIT); Technion Israel Institute of Technology
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN/ISBN:
0364-765X
DOI:
10.1287/moor.28.2.327.14483
Publication Date:
2003
Pages:
327-345
Keywords:
prediction
strategies
game
Abstract:
This paper proposes an extension of the regret-minimization framework from repeated matrix games to stochastic game models, under appropriate recurrence conditions. A decision maker, P1, who wishes to maximize his long-term average reward, faces a Markovian environment that may also be affected by arbitrary actions of other agents. The latter are collectively modeled as a second player, P2, whose strategy is arbitrary. Both states and actions are fully observed by both players. While P1 may obviously secure the min-max value of the game, he may wish to improve on that value when the opponent is not playing a worst-case strategy. For repeated matrix games, an achievable goal is given by the Bayes envelope, which traces P1's best-response payoff against the observed frequencies of P2's actions. We propose a generalization to the stochastic game framework, under recurrence conditions that amount to fixed-state reachability. The empirical Bayes envelope (EBE) is defined as P1's best-response payoff against the stationary strategies of P2 that agree with the observed state-action frequencies. Because the EBE may not be attainable in general, we consider its lower convex hull, the convex Bayes envelope (CBE), which is proved to be achievable by P1. The analysis relies on Blackwell's approachability theory. The CBE is lower bounded by the value of the game, and for irreducible games it turns out to be strictly above the value whenever P2's frequencies deviate from a worst-case strategy. In the special case of single-controller games, where P2 alone affects the state transitions, the EBE itself is shown to be attainable.
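To make the repeated matrix-game notion concrete, the sketch below (not code from the paper; the function name `bayes_envelope` and the example payoff matrix are illustrative assumptions) computes the Bayes envelope value: P1's best-response payoff against P2's observed empirical action frequencies, which in a matrix game reduces to the maximum over P1's actions of the expected reward under those frequencies.

```python
import numpy as np

def bayes_envelope(R: np.ndarray, q: np.ndarray) -> float:
    """Illustrative sketch: Bayes envelope value for a repeated matrix game.

    R[i, j] is P1's reward when P1 plays action i and P2 plays action j;
    q is the vector of P2's observed empirical action frequencies.
    The envelope is P1's best-response payoff against the stationary
    strategy q, i.e. max_i sum_j R[i, j] * q[j].
    """
    return float(np.max(R @ q))

# Hypothetical example: matching-pennies payoffs against a biased opponent.
R = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
q = np.array([0.7, 0.3])       # P2's empirical frequencies
print(bayes_envelope(R, q))    # 0.4, strictly above the game's min-max value of 0.0
```

As the example suggests, whenever the observed frequencies deviate from a worst-case (equalizing) strategy, the envelope exceeds the min-max value; the paper's EBE and CBE extend this target to the stochastic-game setting with state-action frequencies.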
Source URL: