Stochastic First-Order Methods for Average-Reward Markov Decision Processes

Publication Type:
Article; Early Access
Authors:
Li, Tianjiao; Wu, Feiyang; Lan, Guanghui
Affiliations:
University System of Georgia; Georgia Institute of Technology
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN:
0364-765X
DOI:
10.1287/moor.2022.0241
Publication Date:
2024
Keywords:
variational inequalities; learning algorithms; policy evaluation; reinforcement learning; convergence
Abstract:
We study average-reward Markov decision processes (AMDPs) and develop novel first-order methods with strong theoretical guarantees for both policy optimization and policy evaluation. In contrast with the intensive research effort devoted to the finite-sample analysis of policy gradient methods for discounted MDPs, existing studies of policy gradient methods for AMDPs mostly focus on regret bounds under restrictive assumptions and often lack guarantees on the overall sample complexity. Toward this end, we develop an average-reward stochastic policy mirror descent method for solving AMDPs with and without regularizers and provide convergence guarantees in terms of the long-term average reward. For policy evaluation, existing on-policy methods suffer from suboptimal convergence rates and fail to handle insufficiently random policies because of the lack of exploration in the action space. To remedy these issues, we develop a variance-reduced temporal difference (VRTD) method with linear function approximation for randomized policies, together with optimal convergence guarantees, and we design an exploratory VRTD method that resolves the exploration issue and provides comparable convergence guarantees. By combining the policy evaluation and policy optimization parts, we establish sample complexity results for solving AMDPs under both generative and Markovian noise models. Notably, when linear function approximation is utilized, our algorithm updates only in the low-dimensional parameter space and can therefore handle MDPs with large state and action spaces.
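For context, the policy evaluation component described in the abstract builds on temporal difference learning with linear function approximation in the average-reward setting. The sketch below shows a plain differential TD(0) update of that kind, not the paper's variance-reduced (VRTD) or exploratory variants; the environment interface `env`, the policy object `policy`, the feature map `phi`, and the step sizes are hypothetical placeholders assumed for illustration.

```python
import numpy as np

def differential_td0(env, policy, phi, d, alpha=0.01, beta=0.01,
                     num_steps=100_000, seed=0):
    """Minimal average-reward (differential) TD(0) with linear function
    approximation. This is a basic TD baseline, not the paper's VRTD method.
    `env`, `policy`, and the feature map `phi` (state -> R^d) are hypothetical
    interfaces used only for illustration."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)   # parameters of the differential value function V(s) ~ phi(s) @ theta
    eta = 0.0             # running estimate of the long-run average reward
    s = env.reset()
    for _ in range(num_steps):
        a = policy.sample(s, rng)
        s_next, r = env.step(s, a)
        # TD error for the average-reward setting: the reward is centered by eta
        delta = r - eta + phi(s_next) @ theta - phi(s) @ theta
        theta += alpha * delta * phi(s)   # semi-gradient update in the low-dimensional parameter space
        eta += beta * delta               # track the average reward
        s = s_next
    return theta, eta
```

The returned `theta` parameterizes an approximate differential value function and `eta` estimates the long-run average reward; both updates act only on the d-dimensional parameters, which is what allows linear function approximation to scale to large state spaces.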