Online Reinforcement Learning of Optimal Threshold Policies for Markov Decision Processes

Document type:
Article
Authors:
Roy, Arghyadip; Borkar, Vivek; Karandikar, Abhay; Chaporkar, Prasanna
Affiliations:
University of Illinois System; University of Illinois Urbana-Champaign; Indian Institute of Technology System (IIT System); Indian Institute of Technology (IIT) - Bombay; Indian Institute of Technology System (IIT System); Indian Institute of Technology (IIT) - Kanpur
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2021.3108121
Publication date:
2022
Pages:
3722-3729
Keywords:
convergence; Markov processes; computational complexity; computational modeling; simulation; reinforcement learning; process control; Markov decision process (MDP); online learning of threshold policies; reinforcement learning (RL); stochastic approximation (SA) algorithms; stochastic control
Abstract:
Reinforcement learning (RL) methods are adopted in practice to overcome the curses of dimensionality and modeling that afflict dynamic programming approaches to Markov decision process (MDP) problems. Unlike traditional RL algorithms, which ignore the structural properties of the optimal policy, we propose a structure-aware learning algorithm that exploits the ordered multithreshold structure of the optimal policy, whenever it exists. We prove that the proposed algorithm converges asymptotically to the optimal policy. Owing to the reduction in the policy space, the algorithm offers substantial savings in storage and computational complexity over classical RL algorithms. Simulation results establish that it also converges faster than other RL algorithms.
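
To make the idea in the abstract concrete, the Python sketch below learns a single threshold online for a hypothetical admission-control queue. It is an illustration only, not the paper's algorithm: the queueing model and its parameters (B, R, c, p_arrival, p_service), the sigmoid smoothing of the threshold, the likelihood-ratio-style update, and the step sizes are all assumptions made for this sketch. What it does reflect is the storage benefit claimed in the abstract: only a scalar threshold and an average-reward estimate are kept, rather than a value or policy table over the whole state space.

import math
import random

# Hypothetical single-queue admission-control MDP (an assumption, not the
# authors' model): state = queue length in {0, ..., B}; on each arrival the
# controller admits (True) or rejects (False) the job. Each admission earns R,
# and each queued job costs c per slot. For such models the optimal policy is
# often of threshold type: admit iff state < theta. Below, theta is learned
# online with a stochastic-approximation (SA) update applied to a
# sigmoid-smoothed (randomized) threshold policy.

B, R, c = 20, 5.0, 0.4           # buffer size, admission reward, holding cost
p_arrival, p_service = 0.6, 0.5  # Bernoulli arrival / service probabilities
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(state, admit):
    """One slot of the assumed queueing model; returns (next_state, reward)."""
    reward = -c * state
    if admit and state < B:
        state += 1
        reward += R
    if state > 0 and random.random() < p_service:
        state -= 1
    return state, reward

theta = B / 2.0        # threshold estimate, kept continuous and projected to [0, B]
avg_reward = 0.0       # running average-reward estimate
temperature = 2.0      # sharpness of the smoothed threshold
state = 0

for n in range(1, 200001):
    if random.random() < p_arrival:
        # Randomized threshold policy: admit with probability sigmoid((theta - state)/T).
        p_admit = sigmoid((theta - state) / temperature)
        admit = random.random() < p_admit
    else:
        p_admit, admit = None, False

    next_state, reward = step(state, admit)

    # Two-timescale-style SA step sizes (illustrative; constant ratio for simplicity):
    # the average-reward tracker moves faster than the threshold update.
    b_n = 1.0 / (1.0 + n / 100.0)
    a_n = 0.1 / (1.0 + n / 100.0)
    avg_reward += b_n * (reward - avg_reward)
    if p_admit is not None:
        # Likelihood-ratio-style update: d/dtheta log pi(action) = (admit - p_admit)/T.
        grad_log = ((1.0 if admit else 0.0) - p_admit) / temperature
        theta += a_n * (reward - avg_reward) * grad_log
        theta = min(max(theta, 0.0), float(B))  # project back onto [0, B]

    state = next_state

print(f"learned threshold ~ {theta:.2f}, average-reward estimate ~ {avg_reward:.2f}")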