Stochastic Policy Gradient Ascent in Reproducing Kernel Hilbert Spaces

Publication type:
Article
Authors:
Paternain, Santiago; Bazerque, Juan Andres; Small, Austin; Ribeiro, Alejandro
Affiliations:
University of Pennsylvania; Universidad de la Republica, Uruguay
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2020.3029317
Publication date:
2021
Pages:
3429-3444
Keywords:
Stochastic processes; Kernel; Convergence; Complexity theory; Trajectory; Hilbert space; Standards; Autonomous systems; Gradient methods; Markov processes; Unsupervised learning
Abstract:
Reinforcement learning consists of finding policies that maximize an expected cumulative long-term reward in a Markov decision process with unknown transition probabilities and instantaneous rewards. In this article, we consider the problem of finding such optimal policies while assuming they are continuous functions belonging to a reproducing kernel Hilbert space (RKHS). To learn the optimal policy, we introduce a stochastic policy gradient ascent algorithm with three unique novel features. First, the stochastic estimates of policy gradients are unbiased. Second, the variance of stochastic gradients is reduced by drawing on ideas from numerical differentiation. Third, policy complexity is controlled using sparse RKHS representations. The first feature is instrumental in proving convergence to a stationary point of the expected cumulative reward. The second feature facilitates reasonable convergence times. The third feature is a necessity in practical implementations, which we show can be done in a way that does not eliminate convergence guarantees. Numerical examples in standard problems illustrate successful learning of policies with low-complexity representations that are close to stationary points of the expected cumulative reward.
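To make the abstract's ingredients concrete, the following is a minimal toy sketch of policy gradient ascent with an RKHS-parameterized policy, not the paper's algorithm: the class and function names (`RKHSPolicy`, `gaussian_kernel`) and the one-dimensional tracking task are illustrative assumptions, and the paper's unbiased gradient estimator, numerical-differentiation variance reduction, and principled sparsification are replaced here by a plain score-function (REINFORCE-style) estimator with a running baseline and naive pruning of near-zero kernel atoms.

```python
import numpy as np


def gaussian_kernel(x, y, bw=1.0):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 bw^2))."""
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * bw ** 2))


class RKHSPolicy:
    """Gaussian policy whose mean lives in an RKHS:
    mu(s) = sum_i w_i k(c_i, s), and actions are drawn from N(mu(s), sigma^2).
    Each gradient step adds one kernel atom centered at the visited state,
    so the representation is a growing kernel expansion."""

    def __init__(self, sigma=0.5, bw=1.0, prune_tol=1e-4):
        self.centers, self.weights = [], []
        self.sigma, self.bw, self.prune_tol = sigma, bw, prune_tol

    def mean(self, s):
        return sum(w * gaussian_kernel(c, s, self.bw)
                   for c, w in zip(self.centers, self.weights))

    def sample(self, s, rng):
        return self.mean(s) + self.sigma * rng.standard_normal()

    def update(self, s, a, advantage, step):
        # Score-function gradient: the functional gradient of
        # log N(a; mu(s), sigma^2) with respect to mu is
        # (a - mu(s)) / sigma^2 * k(s, .), a single kernel atom.
        coeff = step * advantage * (a - self.mean(s)) / self.sigma ** 2
        self.centers.append(float(s))
        self.weights.append(float(coeff))
        # Crude stand-in for sparse RKHS representations: drop atoms
        # with negligible weight so the expansion stays manageable.
        kept = [(c, w) for c, w in zip(self.centers, self.weights)
                if abs(w) > self.prune_tol]
        self.centers = [c for c, _ in kept]
        self.weights = [w for _, w in kept]


# Toy task: the reward -(a - s)^2 peaks when the action matches the
# state, so the optimal policy mean is the identity map mu(s) = s.
rng = np.random.default_rng(0)
policy = RKHSPolicy(sigma=0.5, bw=0.5)
baseline = 0.0
for _ in range(400):
    s = rng.uniform(-1.0, 1.0)
    a = policy.sample(s, rng)
    r = -(a - s) ** 2
    policy.update(s, a, r - baseline, step=0.05)
    baseline += 0.1 * (r - baseline)  # running-average reward baseline
```

After training, `policy.mean(s)` should track `s` more closely than the zero initialization, while pruning keeps the kernel expansion from retaining negligible atoms.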