OPTIMAL POLICY EVALUATION USING KERNEL-BASED TEMPORAL DIFFERENCE METHODS
Document type:
Article
Authors:
Duan, Yaqi; Wang, Mengdi; Wainwright, Martin J.
Affiliations:
New York University; Princeton University; Massachusetts Institute of Technology (MIT)
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/24-AOS2399
Publication date:
2024
Pages:
1927-1952
Keywords:
Bounds
Regression
Rates
Abstract:
We study nonparametric methods for estimating the value function of an infinite-horizon discounted Markov reward process (MRP). We analyze the kernel-based least-squares temporal difference (LSTD) estimate, which can be understood either as a nonparametric instrumental variables method, or as a projected approximation to the Bellman fixed-point equation. Our analysis imposes no assumptions on the transition operator of the Markov chain, but only conditions on the reward function and the population-level kernel LSTD solutions. Using empirical process theory and concentration inequalities, we establish a nonasymptotic upper bound on the error with explicit dependence on the effective horizon H = (1 - gamma)^{-1} of the Markov reward process, the eigenvalues of the associated kernel operator, and the instance-dependent variance of the Bellman residual error. In addition, we prove minimax lower bounds over subclasses of MRPs, which show that our guarantees are optimal in terms of the sample size n and the effective horizon H. Whereas existing worst-case theory predicts cubic scaling (H^3) in the effective horizon, our theory reveals a much wider range of scalings, depending on the kernel, the stationary distribution, and the variance of the Bellman residual error. Notably, only parametric and near-parametric problems can ever achieve the worst-case cubic scaling.
Source URL:
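
Illustrative code sketch:
To make the projected fixed-point view in the abstract concrete, the following is a minimal Python/NumPy sketch of one common ridge-regularized form of kernel LSTD; it is not the authors' implementation. Given n observed transitions (s_i, r_i, s'_i), writing the value estimate as V(s) = sum_j alpha_j k(s_j, s) and imposing the regularized empirical projected Bellman equation yields the linear system (K - gamma*K_next + n*lam*I) alpha = r, where K[i,j] = k(s_i, s_j) and K_next[i,j] = k(s'_i, s_j). The RBF kernel, the bandwidth, the regularization level lam, and the function names are illustrative assumptions.

import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # Gaussian RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 h^2)).
    # X has shape (n, d), Y has shape (m, d); returns an (n, m) matrix.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def kernel_lstd(S, S_next, r, gamma, lam, bandwidth=1.0):
    # Ridge-regularized kernel LSTD from n transitions (s_i, r_i, s'_i).
    # Solves (K - gamma * K_next + n * lam * I) alpha = r, where
    # K[i, j] = k(s_i, s_j) and K_next[i, j] = k(s'_i, s_j), and returns
    # the value-function estimate V(s) = sum_j alpha_j k(s_j, s).
    n = len(r)
    K = rbf_kernel(S, S, bandwidth)            # k(s_i, s_j)
    K_next = rbf_kernel(S_next, S, bandwidth)  # k(s'_i, s_j)
    A = K - gamma * K_next + n * lam * np.eye(n)
    alpha = np.linalg.solve(A, r)              # A is square but not symmetric

    def value(S_query):
        # Evaluate the estimated value function at query states.
        return rbf_kernel(S_query, S, bandwidth) @ alpha

    return value

# Example usage on a toy 1-d chain (all parameters illustrative):
# S = np.random.rand(200, 1)
# S_next = np.clip(S + 0.1 * np.random.randn(200, 1), 0.0, 1.0)
# r = S.ravel()
# V = kernel_lstd(S, S_next, r, gamma=0.9, lam=1e-3, bandwidth=0.2)
# print(V(np.array([[0.5]])))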