Decentralized Control of Multiagent Systems Using Local Density Feedback
Publication Type:
Article
Authors:
Biswal, Shiba; Elamvazhuthi, Karthik; Berman, Spring
Affiliations:
University of California System; University of California Los Angeles; Arizona State University; Arizona State University-Tempe
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2021.3109520
Publication Date:
2022
Pages:
3920-3932
Keywords:
Decentralized control
Discrete-time Markov processes
Multi-agent systems
Probability density function
Abstract:
In this article, we stabilize a discrete-time Markov process evolving on a compact subset of R^d to an arbitrary target distribution that has an L-infinity density and does not necessarily have a connected support on the state space. We address this problem by stabilizing the corresponding Kolmogorov forward equation, the mean-field model of the system, using a density-dependent transition kernel as the control parameter. Our main application of interest is controlling the distribution of a multiagent system in which each agent evolves according to this discrete-time Markov process. To prevent agent state transitions at the equilibrium distribution, which would potentially waste energy, we show that the Markov process can be constructed so that the operator that pushes forward measures is the identity at the target distribution. To achieve this, the transition kernel is defined as a function of the current agent distribution, resulting in a nonlinear Markov process. Moreover, we design the transition kernel to be decentralized in the sense that it depends only on the local density measured by each agent. We prove the existence of such a decentralized control law that globally stabilizes the target distribution. Furthermore, to implement our control approach on a finite N-agent system, we smooth the mean-field dynamics via mollification. We validate our control law with numerical simulations of multiagent systems of different population sizes. We observe that as N increases, the agent distribution in the N-agent simulations converges to the solution of the mean-field model, and the number of agent state transitions at equilibrium decreases to zero.
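To illustrate the core idea of a density-dependent transition kernel that reduces to the identity at the target distribution, the following is a minimal sketch on a finite state space. It is a hypothetical simplified construction, not the paper's control law: agents here use the global empirical density (the paper's kernel is decentralized and uses only local density measurements), and the surplus/deficit rule below is chosen only because it visibly has the key property that no agent transitions occur once the empirical density equals the target.

```python
import numpy as np

def density_feedback_step(states, pi, rng):
    """One step of a density-dependent (nonlinear) Markov chain for N agents
    on n discrete states. Hypothetical illustrative rule, not the paper's law:
    an agent in state i leaves with probability max(0, mu_i - pi_i) / mu_i,
    which vanishes when the empirical density mu equals the target pi, so the
    pushforward operator is the identity at the target distribution."""
    n = len(pi)
    mu = np.bincount(states, minlength=n) / len(states)  # empirical density
    leave = np.zeros(n)
    occupied = mu > 0
    leave[occupied] = np.clip((mu[occupied] - pi[occupied]) / mu[occupied], 0.0, 1.0)
    deficit = np.clip(pi - mu, 0.0, None)
    if deficit.sum() == 0.0:
        return states.copy()           # exactly at target: no transitions occur
    dest = deficit / deficit.sum()     # route surplus agents toward deficit states
    new = states.copy()
    moves = rng.random(len(states)) < leave[states]
    new[moves] = rng.choice(n, size=moves.sum(), p=dest)
    return new

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.1, 0.1, 0.3])    # target distribution over 4 states
states = np.zeros(2000, dtype=int)     # N = 2000 agents, all starting in state 0
for _ in range(300):
    states = density_feedback_step(states, pi, rng)
mu_final = np.bincount(states, minlength=4) / len(states)
```

In the mean-field limit this rule moves exactly the surplus mass to the deficit states each step; for finite N, the empirical density settles near pi with O(1/sqrt(N)) fluctuations, and the per-step transition count shrinks with those fluctuations, mirroring the article's observation that transitions at equilibrium vanish as N grows.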