A Simple Finite-Time Analysis of TD Learning With Linear Function Approximation

Publication Type:
Article
Author:
Mitra, Aritra
Affiliation:
North Carolina State University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2024.3469328
Publication Date:
2025
Pages:
1388-1394
Keywords:
Temporal difference learning; approximation algorithms; standards; function approximation; convergence; perturbation methods; noise vectors; heuristic algorithms; delays; finite-time analysis; reinforcement learning; stochastic approximation
Abstract:
We study the finite-time convergence of temporal-difference (TD) learning with linear function approximation under Markovian sampling. Existing proofs for this setting either assume a projection step in the algorithm to simplify the analysis, or require a fairly intricate argument to ensure stability of the iterates. We ask: Is it possible to retain the simplicity of a projection-based analysis without actually performing a projection step in the algorithm? Our main contribution is to show this is possible via a novel two-step argument. In the first step, we use induction to prove that under a standard choice of a constant step-size α, the iterates generated by TD learning remain uniformly bounded in expectation. In the second step, we establish a recursion that mimics the steady-state dynamics of TD learning up to a bounded perturbation on the order of O(α²) that captures the effect of Markovian sampling. Combining these pieces leads to an overall approach that considerably simplifies existing proofs. We conjecture that our inductive proof technique will find applications in the analyses of more complex stochastic approximation algorithms, and conclude by providing some examples of such applications.
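For concreteness, below is a minimal, runnable sketch of the algorithm the abstract analyzes: TD(0) with linear function approximation and a constant step-size α, driven by a single Markovian trajectory. The toy Markov chain P, reward vector r, feature matrix Phi, and all hyperparameter values here are hypothetical placeholders chosen for illustration; only the update rule itself reflects the setting described above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, d = 5, 3   # number of states, feature dimension (hypothetical)
gamma = 0.9          # discount factor (hypothetical)
alpha = 0.01         # constant step-size, the α of the analysis (hypothetical value)

# Hypothetical ergodic Markov chain, rewards, and features.
P = rng.dirichlet(np.ones(n_states), size=n_states)   # row-stochastic transition matrix
r = rng.standard_normal(n_states)                     # per-state rewards
Phi = rng.standard_normal((n_states, d))              # feature matrix: row s is phi(s)

theta = np.zeros(d)  # TD iterate
s = 0                # initial state of the Markovian trajectory

for t in range(50_000):
    s_next = rng.choice(n_states, p=P[s])
    # TD error: delta_t = r(s) + gamma * phi(s')ᵀ theta - phi(s)ᵀ theta
    delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    # TD(0) update with constant step-size: theta <- theta + alpha * delta * phi(s)
    theta += alpha * delta * Phi[s]
    s = s_next       # samples are correlated: the next state comes from the chain

print("final iterate:", theta)
```

Note that no projection step appears in the loop; the paper's point is that the iterates above can nonetheless be shown, by induction, to remain uniformly bounded in expectation for a suitably small constant α.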