Q-Learning With Uniformly Bounded Variance
Publication Type:
Article
Authors:
Devraj, Adithya M.; Meyn, Sean P.
Affiliations:
Stanford University; State University System of Florida; University of Florida
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2021.3133184
Publication Date:
2022
Pages:
5948-5963
Keywords:
Complexity theory
Convergence
Approximation algorithms
Upper bound
Reinforcement learning
Costs
Standards
Q-learning
Reinforcement learning (RL)
Stochastic optimal control
Abstract:
Sample complexity bounds are a common performance metric in the reinforcement learning literature. In the discounted-cost, infinite-horizon setting, all of the known bounds can become arbitrarily large as the discount factor approaches unity. These results seem to imply that a very large number of samples is required to achieve an epsilon-optimal policy. The objective of the present work is to introduce a new class of algorithms whose sample complexity is uniformly bounded over all discount factors. One may argue that this is impossible, due to a recent minimax lower bound. The explanation is that those prior bounds concern value function approximation rather than policy approximation. We show that the asymptotic covariance of the tabular Q-learning algorithm with an optimized step-size sequence is a quadratic function of a factor that grows to infinity as the discount factor approaches 1; this is an essentially known result. The new relative Q-learning algorithm proposed here is shown to have an asymptotic covariance that is uniformly bounded over all discount factors.
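To make the contrast in the abstract concrete, the following is a minimal Python sketch of standard tabular Q-learning next to a "relative" variant that subtracts a weighted average of the current Q estimate from each update target. The random MDP, the uniform reference weights `mu`, the 1/n step size, and the exact form of the subtracted term are illustrative assumptions rather than the paper's specification; the sketch only demonstrates the invariance the relative algorithm exploits, namely that shifting the Q function by a constant leaves the greedy policy unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2
gamma = 0.99  # discount factor close to 1, where the variance issue appears

# Illustrative random MDP (assumption): transition kernel P[x, u, x'] and
# costs c[x, u] drawn at random; the setting is cost minimization.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
c = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Assumed reference weights for the relative term: uniform over (x, u) pairs.
mu = np.full(n_states * n_actions, 1.0 / (n_states * n_actions))


def run(relative, n_iters=50_000):
    """Asynchronous tabular Q-learning; relative=True subtracts mu' Q."""
    Q = np.zeros((n_states, n_actions))
    x = 0
    for n in range(1, n_iters + 1):
        u = rng.integers(n_actions)               # uniform exploration
        x_next = rng.choice(n_states, p=P[x, u])  # sample next state
        target = c[x, u] + gamma * Q[x_next].min()
        if relative:
            # Subtracting a scalar shifts Q by a constant in the limit,
            # which leaves the greedy (arg-min) policy unchanged.
            target -= mu @ Q.ravel()
        Q[x, u] += (1.0 / n) * (target - Q[x, u])  # illustrative 1/n step size
        x = x_next
    return Q


for relative in (False, True):
    Q = run(relative)
    print("relative" if relative else "standard",
          "greedy policy:", Q.argmin(axis=1),
          "max|Q|:", round(float(np.abs(Q).max()), 2))
```

In this toy run the two variants recover the same greedy policy, while the standard iterates take values on the order of the mean cost divided by (1 - gamma) and the relative iterates remain of order one; this is only a qualitative illustration of the scaling behavior the paper analyzes, not a reproduction of its results.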