Online Regret Bounds for Satisficing in Markov Decision Processes

Publication Type:
Article; Early Access
Authors:
Hajiabolhassan, Hossein; Ortner, Ronald
Affiliations:
Medical University of Graz; University of Leoben
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN:
0364-765X
DOI:
10.1287/moor.2023.0275
Publication Date:
2025
Keywords:
algorithm
Abstract:
We consider general reinforcement learning under the average reward criterion in Markov decision processes (MDPs), where the learner's goal is not to learn an optimal policy but to accept any policy whose average reward is above a given satisfaction level a. We show that with this more modest objective, it is possible to give algorithms that have only constant regret with respect to the level a, provided that some policy attains at least this level. This generalizes known results from the bandit setting to MDPs. Further, we present a more general algorithm that achieves the best of both worlds: if the optimal policy has average reward above a, this algorithm has bounded regret with respect to a. If, on the other hand, all policies are below a, then the expected regret with respect to the optimal policy is bounded as for the UCRL2 algorithm.
Source URL: