Markov Decision Processes with Arbitrary Reward Processes

Publication type:
Article
Authors:
Yu, Jia Yuan; Mannor, Shie; Shimkin, Nahum
Affiliations:
McGill University; Technion Israel Institute of Technology
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN/ISBN:
0364-765X
DOI:
10.1287/moor.1090.0397
Publication date:
2009
Pages:
737-757
Keywords:
Regret minimization
Abstract:
We consider a learning problem where the decision maker interacts with a standard Markov decision process, with the exception that the reward functions vary arbitrarily over time. We show that, against every possible realization of the reward process, the agent can perform, in hindsight, as well as every stationary policy. This generalizes the classical no-regret result for repeated games. Specifically, we present an efficient online algorithm, in the spirit of reinforcement learning, that ensures that the agent's average performance loss vanishes over time, provided that the environment is oblivious to the agent's actions. Moreover, it is possible to modify the basic algorithm to cope with instances where reward observations are limited to the agent's trajectory. We present further modifications that reduce the computational cost by using function approximation and that track the optimal policy through infrequent changes.
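For reference, the vanishing-regret guarantee described in the abstract can be formalized along the following standard lines. This is a sketch of the usual average-regret definition against stationary policies; the paper's exact normalization, reward range, and comparison class may differ.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Average regret of the agent's trajectory (s_t, a_t) against the best
% stationary policy in hindsight, over reward functions r_t : S x A -> [0,1]
% chosen arbitrarily (by an oblivious environment) at each step t.
% Assumed notation: \Pi is the set of stationary policies and
% (s_t^{\pi}, a_t^{\pi}) the trajectory induced by following \pi throughout.
\[
  \mathrm{Regret}_T \;=\;
    \max_{\pi \in \Pi}\,
      \mathbb{E}\biggl[\frac{1}{T}\sum_{t=1}^{T} r_t\bigl(s_t^{\pi}, a_t^{\pi}\bigr)\biggr]
    \;-\;
      \mathbb{E}\biggl[\frac{1}{T}\sum_{t=1}^{T} r_t(s_t, a_t)\biggr].
\]
The no-regret guarantee is that $\mathrm{Regret}_T \to 0$ as $T \to \infty$
for every realization of the reward sequence $(r_t)$.
\end{document}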
Source URL: