Reliable Off-Policy Evaluation for Reinforcement Learning

Type:
Article
Authors:
Wang, Jie; Gao, Rui; Zha, Hongyuan
Affiliations:
The Chinese University of Hong Kong, Shenzhen; University of Texas System; University of Texas Austin; The Chinese University of Hong Kong, Shenzhen; Shenzhen Institute of Artificial Intelligence & Robotics for Society
Journal:
OPERATIONS RESEARCH
ISSN:
0030-364X
DOI:
10.1287/opre.2022.2382
Publication date:
2024
Keywords:
Abstract:
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy using logged trajectory data generated from a different behavior policy, without executing the target policy. Reinforcement learning in high-stakes environments, such as healthcare and education, is often limited to off-policy settings because of safety or ethical concerns or the infeasibility of exploration. Hence, it is imperative to quantify the uncertainty of the off-policy estimate before deploying the target policy. In this paper, we propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged trajectories. Leveraging methodologies from distributionally robust optimization, we show that with a proper selection of the size of the distributional uncertainty set, these estimates serve as confidence bounds with nonasymptotic and asymptotic guarantees under stochastic or adversarial environments. Our results are also generalized to batch reinforcement learning and are supported by empirical analysis.
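To make the abstract concrete: one standard instantiation of off-policy evaluation weights each logged trajectory's return by the cumulative likelihood ratio of the target and behavior policies, and a distributionally robust lower (or optimistic upper) estimate can then be taken over a ball around the empirical distribution of those weighted returns. The sketch below assumes this particular setup with a Kullback-Leibler ball and its standard convex dual; the function names, the KL choice of uncertainty set, and the radius rho are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize_scalar


def is_returns(trajectories, pi_target, pi_behavior, gamma=0.99):
    """Per-trajectory importance-weighted returns (hypothetical interface).

    Each trajectory is a list of (state, action, reward) tuples;
    pi_target(a, s) and pi_behavior(a, s) return action probabilities.
    """
    weighted = []
    for traj in trajectories:
        ratio, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            ratio *= pi_target(a, s) / pi_behavior(a, s)  # cumulative likelihood ratio
            ret += gamma ** t * r                         # discounted return
        weighted.append(ratio * ret)
    return np.asarray(weighted)


def kl_robust_mean(x, rho, optimistic=False):
    """Worst-case (or best-case) mean of x over a KL ball of radius rho
    around the empirical distribution P_n, via the standard dual form:
        inf_{KL(Q||P_n) <= rho} E_Q[x] = sup_{a>0} -a*log E_n[exp(-x/a)] - a*rho.
    """
    sign = 1.0 if optimistic else -1.0
    n = len(x)

    def objective(a):
        # a * log E_n[exp(sign * x / a)] + a * rho, computed stably
        return a * (logsumexp(sign * x / a) - np.log(n)) + a * rho

    res = minimize_scalar(objective, bounds=(1e-6, 1e6), method="bounded")
    return sign * res.fun


# Hypothetical usage: robust and optimistic reward estimates from logged data.
# returns = is_returns(logged_trajectories, pi_target, pi_behavior)
# lower = kl_robust_mean(returns, rho=0.1)                   # robust estimate
# upper = kl_robust_mean(returns, rho=0.1, optimistic=True)  # optimistic estimate
```

In this style of construction, shrinking the radius rho at an appropriate rate in the number of logged trajectories is what turns the robust and optimistic estimates into a confidence interval, which matches the flavor of guarantee the abstract describes; the paper's specific radius selection and guarantees are not given here.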