Efficient evaluation of natural stochastic policies in off-line reinforcement learning
Publication type:
Article
Authors:
Kallus, Nathan; Uehara, Masatoshi
Affiliations:
Cornell University; Cornell University
Journal:
BIOMETRIKA
ISSN:
0006-3444
DOI:
10.1093/biomet/asad059
Publication year:
2024
Keywords:
dynamic treatment regimes
propensity score
intervention
models
Abstract:
We study the efficient off-policy evaluation of natural stochastic policies, which are defined in terms of deviations from the unknown behaviour policy. This is a departure from the literature on off-policy evaluation that largely considers the evaluation of explicitly specified policies. Crucially, off-line reinforcement learning with natural stochastic policies can help alleviate issues of weak overlap, lead to policies that build upon current practice and improve policies' implementability in practice. Compared with the classic case of a prespecified evaluation policy, when evaluating natural stochastic policies, the efficiency bound, which measures the best-achievable estimation error, is inflated since the evaluation policy itself is unknown. In this paper we derive the efficiency bounds of two major types of natural stochastic policies: tilting policies and modified treatment policies. We then propose efficient nonparametric estimators that attain the efficiency bounds under lax conditions and enjoy a partial double robustness property.
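To make the notion of a tilting policy concrete, the following is a minimal single-stage (contextual-bandit) sketch: the unknown binary-action behaviour policy is estimated by a crude plug-in (here, binned action frequencies stand in for any flexible estimator), the tilting policy is formed by reweighting the estimated behaviour policy with `exp(tau * a)`, and its value is estimated by simple importance sampling. All names, the synthetic data, and the binning estimator are illustrative assumptions; this is not the paper's efficient, partially doubly robust estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged bandit data; the behaviour propensity P(A=1|X) is treated
# as unknown and must be estimated from the logged (x, a) pairs.
n = 50_000
x = rng.normal(size=n)
pb_true = 1.0 / (1.0 + np.exp(-x))           # true behaviour propensity
a = rng.binomial(1, pb_true)
r = x * a + rng.normal(scale=0.5, size=n)    # observed reward

# Crude plug-in propensity estimate: bin x by quantiles and take the
# empirical action frequency within each bin.
bins = np.quantile(x, np.linspace(0, 1, 51))
idx = np.clip(np.digitize(x, bins[1:-1]), 0, 49)
pb_hat = np.array([a[idx == k].mean() for k in range(50)])[idx]
pb_hat = np.clip(pb_hat, 1e-3, 1 - 1e-3)

# Tilting policy: pi_tau(a|x) proportional to pi_b(a|x) * exp(tau * a),
# i.e. a small deviation from the (estimated) behaviour policy.
tau = 0.5
num = pb_hat * np.exp(tau)
pi_tau1 = num / ((1.0 - pb_hat) + num)       # pi_tau(A=1|x)

# Importance-sampling estimate of the tilted policy's value.
w = np.where(a == 1, pi_tau1 / pb_hat, (1.0 - pi_tau1) / (1.0 - pb_hat))
v_hat = np.mean(w * r)
print(f"estimated value of tilting policy (tau={tau}): {v_hat:.3f}")
```

Because the weight uses the estimated propensity in both numerator and denominator, the weights average close to one by construction; the paper's contribution is characterising how the extra uncertainty from estimating the behaviour policy inflates the efficiency bound, and constructing estimators that attain that bound.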
Source URL: