Uncertainty Quantification and Exploration for Reinforcement Learning

Publication type:
Article
Authors:
Zhu, Yi; Dong, Jing; Lam, Henry
Affiliations:
Northwestern University; Columbia University; Columbia University
Journal:
OPERATIONS RESEARCH
ISSN/ISBN:
0030-364X
DOI:
10.1287/opre.2023.2436
Publication date:
2024
Pages:
1689-1709
Keywords:
simulation experiments; solution quality; Monte Carlo; allocation; selection; bounds; policies; ranking; model
Abstract:
We investigate statistical uncertainty quantification for reinforcement learning (RL) and its implications for exploration policy. Despite the ever-growing literature on RL applications, fundamental questions about inference and error quantification, such as large-sample behaviors, remain largely open. In this paper, we fill this gap in the literature by studying the central limit theorem behaviors of estimated Q-values and value functions under various RL settings. In particular, we explicitly identify closed-form expressions for the asymptotic variances, which allow us to efficiently construct asymptotically valid confidence regions for key RL quantities. Furthermore, we utilize these asymptotic expressions to design an effective exploration strategy, which we call Q-value-based Optimal Computing Budget Allocation (QOCBA). The policy relies on maximizing the relative discrepancies among the Q-value estimates. Numerical experiments show the superior performance of our exploration strategy over other benchmark policies.
Source URL: