Reinforcement Learning of Structured Stabilizing Control for Linear Systems With Unknown State Matrix

Publication Type:
Article
Authors:
Mukherjee, Sayak; Vu, Thanh Long
Affiliations:
United States Department of Energy (DOE); Pacific Northwest National Laboratory
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2022.3155384
Publication Date:
2023
Pages:
1746-1752
Keywords:
Heuristic algorithms; computational modeling; optimal control; feedback control; dynamical systems; adaptation models; trajectory; distributed control; linear quadratic regulator (LQR); reinforcement learning (RL); stability guarantee; structured learning
Abstract:
This article addresses the design of stabilizing feedback control gains for continuous-time linear systems with an unknown state matrix, where the control gain is subject to a structural constraint. We combine ideas from reinforcement learning (RL) with sufficient stability and performance guarantees to design these structured gains using trajectory measurements of states and controls. We first formulate a model-based linear quadratic regulator (LQR) framework to compute the structured control gain. Subsequently, we transform this model-based LQR formulation into a data-driven RL algorithm that removes the need to know the system state matrix. Theoretical guarantees are provided for the stability of the closed-loop system and the convergence of the structured RL (SRL) algorithm. A notable application of the proposed SRL framework is the design of distributed static feedback control, which is essential for the automatic control of many large-scale cyber-physical systems. We validate our theoretical results with numerical simulations on a multiagent networked linear time-invariant dynamical system.
Source URL:
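
For readers who want a concrete picture of the structured-gain computation described in the abstract, the Python sketch below illustrates the model-based step only: a policy-iteration loop that alternates a Lyapunov solve with a gain update projected onto a sparsity mask. The function name, the mask-based projection, the damping factor, and the toy two-agent system are all illustrative assumptions, not the paper's actual SRL algorithm; the article's data-driven method further replaces the Lyapunov solve with quantities estimated from measured state and input trajectories, so the state matrix A is never used there.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def structured_policy_iteration(A, B, Q, R, mask, K0, iters=50, alpha=0.5):
    """Illustrative structured LQR computation via projected policy iteration.

    mask : 0/1 array fixing the sparsity pattern of the gain K (e.g., the
           communication graph of a distributed controller).
    K0   : an initial stabilizing gain that already respects the pattern.
    alpha: damping factor on the gain update, kept < 1 for robustness.
    """
    K = K0 * mask
    for _ in range(iters):
        Acl = A - B @ K
        if np.max(np.linalg.eigvals(Acl).real) >= 0:
            raise RuntimeError("gain update lost closed-loop stability")
        # Policy evaluation: solve Acl^T P + P Acl = -(Q + K^T R K) for P.
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement, projected onto the structure: entries of the
        # unconstrained LQR update R^{-1} B^T P outside the mask are zeroed.
        K = (1 - alpha) * K + alpha * mask * np.linalg.solve(R, B.T @ P)
    return K, P

# Toy two-agent network: each agent may only feed back its own two states,
# so the mask is block-diagonal and cross-agent feedback is forbidden.
A = np.array([[0.5,  0.1,  0.05, 0.0 ],
              [0.2, -1.0,  0.0,  0.05],
              [0.05, 0.0,  0.4,  0.1 ],
              [0.0,  0.05, 0.2, -1.0 ]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
mask = np.array([[1.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 1.0]])
K0 = np.array([[2.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 2.0, 0.0]])
Q, R = np.eye(4), np.eye(2)

K, P = structured_policy_iteration(A, B, Q, R, mask, K0)
print("structured gain K:\n", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))

In the data-driven setting of the article, the evaluation step would instead be carried out with least-squares estimates built from sampled trajectories, which is precisely what removes the dependence on A; the sketch above covers only the model-based formulation, and the explicit stability check stands in for the guarantees the paper establishes analytically.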