Distributional Off-Policy Evaluation in Reinforcement Learning

Publication Type:
Article; Early Access
Authors:
Qi, Zhengling; Bai, Chenjia; Wang, Zhaoran; Wang, Lan
Affiliations:
George Washington University; Harbin Institute of Technology; Northwestern University; University of Miami; China Telecom Corp. Ltd.
Journal:
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
ISSN:
0162-1459
DOI:
10.1080/01621459.2025.2506197
Publication Date:
2025
Keywords:
utility rates
Abstract:
In the reinforcement learning (RL) literature, off-policy evaluation mainly focuses on estimating the value of a target policy from pre-collected data generated by some behavior policy. Motivated by the recent success of distributional RL in many practical applications, we study the distributional off-policy evaluation problem in the batch setting when the reward is multivariate. We propose an offline Wasserstein-based approach to simultaneously estimate the joint distribution of the multivariate discounted cumulative reward given any initial state-action pair in an infinite-horizon Markov decision process. A finite-sample error bound for the proposed estimator, with respect to a modified Wasserstein metric, is established in terms of both the number of trajectories and the number of decision points per trajectory in the batch data. Extensive numerical studies demonstrate the superior performance of the proposed method. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
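To make the two central objects of the abstract concrete, the sketch below illustrates (i) the multivariate discounted cumulative reward for a single trajectory and (ii) an empirical 1-Wasserstein distance between two equal-size samples of a one-dimensional statistic. This is a minimal illustration only, not the authors' estimator or their modified Wasserstein metric; the function names, the discount factor, and the coordinate-wise sorted-sample formula for the 1-D Wasserstein distance are assumptions made for exposition.

```python
import numpy as np

def discounted_return(rewards, gamma=0.9):
    """Multivariate discounted cumulative reward: sum_t gamma^t * r_t.

    rewards: array of shape (T, d), one trajectory of d-dimensional rewards.
    Returns a length-d vector. Illustrative helper, not the paper's method.
    """
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(rewards.shape[0])
    return (discounts[:, None] * rewards).sum(axis=0)

def empirical_w1_1d(x, y):
    """Empirical 1-Wasserstein distance between two 1-D samples of equal
    size: the mean absolute difference of the sorted samples. This is the
    standard closed form in 1-D, not the paper's modified metric."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    if x.shape != y.shape:
        raise ValueError("equal sample sizes required for this formula")
    return float(np.mean(np.abs(x - y)))

# Example: a 2-step trajectory with 2-dimensional rewards.
G = discounted_return([[1.0, 0.0], [0.0, 1.0]], gamma=0.5)  # -> [1.0, 0.5]
d = empirical_w1_1d([0.0, 1.0], [1.0, 2.0])                 # -> 1.0
```

In the batch setting of the paper, one would observe many trajectories under the behavior policy and compare the induced distribution of such returns to that under the target policy; the code above only fixes the notation for the return and the metric.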