Stabilization of Nonlinear Discrete-Time Systems to Target Measures Using Stochastic Feedback Laws
Publication type:
Article
Authors:
Biswal, Shiba; Elamvazhuthi, Karthik; Berman, Spring
Affiliations:
University of California System; University of California Los Angeles; Arizona State University; Arizona State University-Tempe
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2020.3002971
Publication date:
2021
Pages:
1957-1972
Keywords:
aerospace electronics
Markov processes
games
convergence
mathematical model
kernel
nonlinear control systems
decentralized control
discrete-time Markov processes
multiagent systems
swarm robotics
optimization
Abstract:
In this article, we address the problem of stabilizing a discrete-time deterministic nonlinear control system to a target invariant measure using time-invariant stochastic feedback laws. This problem can be viewed as an extension of the problem of designing the transition probabilities of a Markov chain so that the process is exponentially stabilized to a target stationary distribution. Alternatively, it can be seen as an extension of the classical control problem of asymptotically stabilizing a discrete-time system to a single point, which corresponds to the Dirac measure in the measure stabilization framework. We assume that the target measure is supported on the entire state space of the system and is absolutely continuous with respect to the Lebesgue measure. Under the condition that the system is locally controllable at every point in the state space within one time step, we show that the associated measure stabilization problem is well-posed. Given this well-posedness result, we then frame an infinite-dimensional convex optimization problem to construct feedback control laws that stabilize the system to a target invariant measure at a maximized rate of convergence. We validate our optimization approach with numerical simulations of two-dimensional linear and nonlinear discrete-time control systems.
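The abstract describes the problem as an extension of designing the transition probabilities of a Markov chain so that the process converges to a target stationary distribution. The following is a minimal finite-state sketch of that baseline idea using the classical Metropolis construction; it is illustrative only and is not the paper's infinite-dimensional construction, and the helper name `metropolis_chain` is hypothetical.

```python
import numpy as np

def metropolis_chain(pi, proposal=None):
    """Build a transition matrix P whose stationary distribution is pi,
    via the Metropolis acceptance rule (illustrative finite-state analogue)."""
    n = len(pi)
    if proposal is None:
        proposal = np.full((n, n), 1.0 / n)  # uniform proposal kernel
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # accept a proposed move i -> j with prob.
                # min(1, pi_j q_ji / (pi_i q_ij)); detailed balance then
                # guarantees pi is invariant for P
                a = min(1.0, (pi[j] * proposal[j, i]) / (pi[i] * proposal[i, j]))
                P[i, j] = proposal[i, j] * a
        P[i, i] = 1.0 - P[i].sum()  # remaining mass stays at i
    return P

# target stationary distribution on a 4-state space
pi = np.array([0.1, 0.2, 0.3, 0.4])
P = metropolis_chain(pi)

# pi is invariant: pi P = pi
assert np.allclose(pi @ P, pi)

# iterating the chain from any initial distribution converges to pi
mu = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(200):
    mu = mu @ P
assert np.allclose(mu, pi, atol=1e-6)
```

In the continuous-state setting treated in the article, the analogous object is a stochastic feedback law inducing a transition kernel with the target measure as its invariant measure, and the convergence rate is optimized rather than fixed by a particular construction.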
Source URL: