A Hitting Time Analysis for Stochastic Time-Varying Functions With Applications to Adversarial Attacks on Computation of Markov Decision Processes
Publication Type:
Article
Authors:
Yekkehkhany, Ali; Feng, Han; Ying, Donghao; Lavaei, Javad
Affiliations:
University of California System; University of California Berkeley
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2023.3311715
Publication Year:
2024
Pages:
3615-3630
Keywords:
Adversarial Markov decision process (MDP)
hitting time
probabilistic Banach fixed-point theorem
probabilistic contraction-expansion mapping
stochastic operators
stochastic time-varying functions
Abstract:
Stochastic time-varying optimization is an integral part of learning in which the shape of the function changes over time in a nondeterministic manner. This article considers multiple models of stochastic time variation and analyzes the corresponding notion of hitting time for each model, i.e., the period after which optimizing the stochastic time-varying function reveals informative statistics about the optimization of the target function. The studied models of time variation are motivated by adversarial attacks on the computation of value iteration in Markov decision processes. In this application, the hitting time quantifies the extent to which the computation is robust to adversarial disturbances. We develop upper bounds on the hitting time by analyzing the contraction-expansion transformation appearing in the time-variation models. We prove that the hitting time of the value function in value iteration with a probabilistic contraction-expansion transformation is logarithmic in the inverse of a desired precision. In addition, the hitting time is analyzed for the optimization of unknown continuous or discrete time-varying functions whose noisy evaluations are revealed over time. The upper bound for a continuous function is super-quadratic (but subcubic) in the inverse of a desired precision, and the upper bound for a discrete function is logarithmic in the cardinality of the function domain. Improved bounds are obtained for convex functions, and we show that such functions are learned faster than nonconvex functions. Finally, we study a time-varying linear model with additive noise, where the hitting time is bounded using the notion of shape dominance.
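The hitting-time notion described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact model or algorithm: the MDP data, attack probability, and disturbance magnitude below are made-up assumptions. It treats value iteration as a γ-contraction in the sup-norm, injects a bounded random disturbance with some probability at each step (a crude stand-in for a probabilistic contraction-expansion transformation), and records the first iteration at which the value function enters an ε-ball around the unperturbed fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 4, 2          # hypothetical small MDP: 4 states, 2 actions
gamma = 0.9            # discount factor => Bellman operator is a gamma-contraction
eps = 1e-2             # desired precision defining the hitting event
p_attack = 0.1         # probability of an adversarial disturbance per iteration

# Random transition tensor P[a, s, s'] (rows normalized) and rewards R[a, s].
P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((nA, nS))

def bellman(V):
    # Bellman optimality operator: (TV)(s) = max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
    return np.max(R + gamma * (P @ V), axis=0)

# Approximate the fixed point V* by running clean value iteration to convergence
# (contraction => unique fixed point, geometric convergence).
V_star = np.zeros(nS)
for _ in range(500):
    V_star = bellman(V_star)

# Perturbed iteration: with probability p_attack, a bounded disturbance is added.
# The hitting time is the first t with ||V_t - V*||_inf < eps.
T_max = 10_000
V, hit = np.zeros(nS), None
for t in range(1, T_max + 1):
    V = bellman(V)
    if rng.random() < p_attack:
        V = V + rng.uniform(-0.01, 0.01, size=nS)  # bounded adversarial noise
    if np.max(np.abs(V - V_star)) < eps:
        hit = t
        break

print(hit)  # first entry time into the eps-ball around V* (depends on the seed)
```

Because each clean step shrinks the sup-norm error by a factor of γ while each attack adds at most a bounded amount, the iterate reaches the ε-ball after a run of attack-free steps of logarithmic length, which is the qualitative behavior the paper's logarithmic hitting-time bound captures.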