Model Approximation in MDPs With Unbounded Per-Step Cost
Publication type:
Article
Authors:
Bozkurt, Berk; Mahajan, Aditya; Nayyar, Ashutosh; Ouyang, Yi
Affiliations:
McGill University; University of Southern California
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2025.3532181
Publication year:
2025
Pages:
4624-4639
Keywords:
costs
dynamic programming
upper bound
stability analysis
kernel
cost function
aerospace electronics
weight measurement
random variables
mathematical models
Bellman operators
integral probability metrics (IPM)
Markov decision processes (MDPs)
model approximation
Abstract:
In this article, we consider the problem of designing a control policy for an infinite-horizon discounted cost Markov decision process M when we only have access to an approximate model M̂. How well does an optimal policy π̂⋆ of the approximate model perform when used in the original model M? We answer this question by bounding a weighted norm of the difference between the value function of π̂⋆ when used in M and the optimal value function of M. We then extend our results and obtain potentially tighter upper bounds by considering affine transformations of the per-step cost. We further provide upper bounds that explicitly depend on the weighted distance between cost functions and the weighted distance between transition kernels of the original and approximate models. We present examples to illustrate our results.
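The quantity studied in the abstract can be illustrated numerically on a small finite MDP: solve the approximate model M̂ for its optimal policy π̂⋆, evaluate that policy in the true model M, and measure the weighted sup-norm gap to the optimal value function of M. The sketch below is a hypothetical illustration, not the paper's method: the models, costs, perturbation sizes, and the uniform weight function are all invented for demonstration (the paper's framework allows unbounded costs via non-trivial weights).

```python
import numpy as np

# Hypothetical example: a random true model M and a perturbed approximate model M̂.
gamma = 0.9
nS, nA = 3, 2
rng = np.random.default_rng(0)

# True model M: transition kernel P[a, s, s'] and per-step cost c[s, a].
P = rng.dirichlet(np.ones(nS), size=(nA, nS))
c = rng.uniform(0.0, 1.0, size=(nS, nA))

# Approximate model M̂: mixed kernel (rows still sum to 1) and noisy cost.
P_hat = 0.9 * P + 0.1 * rng.dirichlet(np.ones(nS), size=(nA, nS))
c_hat = c + rng.normal(0.0, 0.05, size=(nS, nA))

def value_iteration(P, c, tol=1e-10):
    """Return the optimal value function and a greedy (cost-minimizing) policy."""
    V = np.zeros(nS)
    while True:
        # Q[s, a] = c[s, a] + gamma * sum_x P[a, s, x] * V[x]
        Q = c + gamma * np.einsum('asx,x->sa', P, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

def policy_evaluation(P, c, pi):
    """Solve (I - gamma * P_pi) V = c_pi for a stationary deterministic policy pi."""
    P_pi = P[pi, np.arange(nS), :]
    c_pi = c[np.arange(nS), pi]
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, c_pi)

V_star, _ = value_iteration(P, c)           # optimal value function of M
_, pi_hat = value_iteration(P_hat, c_hat)   # optimal policy of the approximate model M̂
V_pi_hat = policy_evaluation(P, c, pi_hat)  # value of π̂⋆ when used in M

w = np.ones(nS)  # weight function (uniform here, for illustration only)
gap = np.max(np.abs(V_pi_hat - V_star) / w)
print(f"weighted sup-norm gap: {gap:.6f}")
```

Since the MDP minimizes cost, V_pi_hat dominates V_star componentwise, so the gap is nonnegative; the paper's bounds control this gap in terms of weighted distances between the cost functions and transition kernels of M and M̂.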