A First-Order Approach to Accelerated Value Iteration
Result type:
Article
Authors:
Goyal, Vineet; Grand-Clement, Julien
Affiliation:
Columbia University
Journal:
OPERATIONS RESEARCH
ISSN/ISBN:
0030-364X
DOI:
10.1287/opre.2022.2269
Publication date:
2023
Pages:
517-535
Keywords:
policy-iteration
Markov
algorithms
Abstract:
Markov decision processes (MDPs) are used to model stochastic systems in many applications. Several efficient algorithms to compute optimal policies have been studied in the literature, including value iteration (VI) and policy iteration. However, these do not scale well, especially when the discount factor for the infinite-horizon discounted reward, lambda, gets close to one. In particular, the running time scales as O(1/(1 - lambda)) for these algorithms. In this paper, our goal is to design new algorithms that scale better than previous approaches when lambda approaches 1. Our main contribution is to present a connection between VI and gradient descent and adapt the ideas of acceleration and momentum in convex optimization to design faster algorithms for MDPs. We prove theoretical guarantees of faster convergence of our algorithms for the computation of the value function of a policy, where the running times of our algorithms scale as O(1/sqrt(1 - lambda)) for reversible MDP instances. The improvement is quite analogous to Nesterov's acceleration and momentum in convex optimization. We also provide a lower bound on the convergence properties of any first-order algorithm for solving MDPs, presenting a family of MDP instances for which no algorithm can converge faster than VI when the number of iterations is smaller than the number of states. We introduce safe accelerated value iteration (S-AVI), which alternates between accelerated updates and value iteration updates. Our algorithm S-AVI is worst-case optimal and retains the theoretical convergence properties of VI while exhibiting strong empirical performance and providing significant speedups when compared with classical approaches (up to one order of magnitude in many cases) for a large test bed of MDP instances.
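For intuition on the connection the abstract describes, the following is a minimal Python sketch of a momentum-augmented value-iteration update, v_{k+1} = T(v_k) + beta (v_k - v_{k-1}), alongside classical VI (beta = 0). It is not the paper's exact Accelerated VI or S-AVI algorithm (S-AVI additionally alternates accelerated updates with plain VI updates as a safeguard); the function names, the momentum weight beta = (1 - sqrt(1 - lambda))^2, and the random test instance are illustrative assumptions.

```python
import numpy as np

def bellman_update(v, P, r, lam):
    """Bellman optimality operator: T(v)_s = max_a [ r(a,s) + lam * sum_s' P(a,s,s') v(s') ]."""
    q = r + lam * np.einsum("aij,j->ai", P, v)  # state-action values, shape (A, S)
    return q.max(axis=0)

def momentum_value_iteration(P, r, lam, beta, n_iter=5000, tol=1e-8):
    """Value iteration with a heavy-ball-style momentum term (illustrative sketch).

    v_{k+1} = T(v_k) + beta * (v_k - v_{k-1}); beta = 0 recovers classical VI.
    """
    n_states = r.shape[1]
    v_prev = np.zeros(n_states)
    v = bellman_update(v_prev, P, r, lam)
    for _ in range(n_iter):
        v_next = bellman_update(v, P, r, lam) + beta * (v - v_prev)
        if np.max(np.abs(v_next - v)) < tol:
            return v_next
        v_prev, v = v, v_next
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_actions, n_states = 3, 20
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)        # row-stochastic transition kernels
    r = rng.random((n_actions, n_states))
    lam = 0.99
    beta = (1.0 - np.sqrt(1.0 - lam)) ** 2   # momentum weight analogous to Nesterov's choice
    v_momentum = momentum_value_iteration(P, r, lam, beta)
    v_plain = momentum_value_iteration(P, r, lam, beta=0.0)
    res_m = np.max(np.abs(bellman_update(v_momentum, P, r, lam) - v_momentum))
    res_0 = np.max(np.abs(bellman_update(v_plain, P, r, lam) - v_plain))
    print(f"Bellman residual  momentum: {res_m:.2e}  plain VI: {res_0:.2e}")
```

The sketch only illustrates how a momentum term attaches to the VI recursion; the paper's safeguard (alternating with exact VI steps in S-AVI) is what preserves worst-case convergence guarantees, since the raw momentum update alone need not be a contraction.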
Source URL: