On Minimizing Total Discounted Cost in MDPs Subject to Reachability Constraints
Document type:
Article
Authors:
Savas, Yagiz; Verginis, Christos K.; Hibbard, Michael; Topcu, Ufuk
Affiliations:
University of Texas System; University of Texas Austin
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3384834
Publication year:
2024
Pages:
6466-6473
Keywords:
Costs
Probabilistic logic
Task analysis
Planning
Approximation algorithms
Trajectory
Markov decision processes (MDPs)
Discounting
Optimization
Reachability
Abstract:
In this article, we study the synthesis of a policy in a Markov decision process (MDP) under which an agent reaches a target state in the MDP while minimizing its total discounted cost. The problem combines a reachability criterion with a discounted cost criterion and naturally expresses the completion of a task with probabilistic guarantees and optimal transient performance. We first establish that an optimal policy for the considered formulation may not exist, but that a near-optimal stationary policy always exists. We additionally provide a necessary and sufficient condition for the existence of an optimal policy. We then restrict our attention to stationary deterministic policies and show that the decision problem associated with the synthesis of an optimal stationary deterministic policy is NP-complete. Finally, we provide an exact algorithm based on mixed-integer linear programming and propose an efficient approximation algorithm based on linear programming for the synthesis of an optimal stationary deterministic policy.
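The abstract combines two criteria for a single stationary policy: the probability of reaching a target state and the total discounted cost. The following minimal sketch evaluates both criteria for a stationary deterministic policy on a toy four-state MDP. The MDP (transitions, costs, discount factor) and the policy are illustrative assumptions, not taken from the paper; reachability probabilities solve a linear system over the transient states, and discounted costs solve the standard policy-evaluation system.

```python
import numpy as np

# Toy 4-state MDP: states 0 and 1 are transient, state 2 is the target
# (absorbing), state 3 is a trap (absorbing). Transitions P[a][s, s'],
# costs c[a][s], and gamma are illustrative assumptions.
gamma = 0.9
P = {
    0: np.array([[0.0, 0.8, 0.1, 0.1],
                 [0.0, 0.0, 0.9, 0.1],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0]]),
    1: np.array([[0.0, 0.2, 0.7, 0.1],
                 [0.0, 0.0, 0.5, 0.5],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0]]),
}
c = {0: np.array([1.0, 1.0, 0.0, 0.0]),
     1: np.array([4.0, 2.0, 0.0, 0.0])}

def evaluate(policy):
    """Return (reachability probabilities, total discounted costs) for a
    stationary deterministic policy given as a state -> action array."""
    n = 4
    Ppi = np.array([P[policy[s]][s] for s in range(n)])  # induced chain
    cpi = np.array([c[policy[s]][s] for s in range(n)])  # induced costs
    transient = [0, 1]
    # Reachability: p = P_pi p on transient states, p(target)=1, p(trap)=0.
    A = np.eye(len(transient)) - Ppi[np.ix_(transient, transient)]
    b = Ppi[transient, 2]  # one-step probability of hitting the target
    p = np.zeros(n)
    p[2] = 1.0
    p[transient] = np.linalg.solve(A, b)
    # Discounted cost: v = c_pi + gamma * P_pi v (absorbing states cost 0).
    v = np.linalg.solve(np.eye(n) - gamma * Ppi, cpi)
    return p, v

p, v = evaluate(np.array([0, 0, 0, 0]))
print(p[0], v[0])  # prints 0.82 1.72
```

The synthesis problem studied in the paper searches over policies so that the reachability value meets a probabilistic guarantee while the discounted cost is minimized; searching over the finitely many stationary deterministic policies is what the paper encodes as a mixed-integer linear program.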