Robust Output Regulation and Reinforcement Learning-Based Output Tracking Design for Unknown Linear Discrete-Time Systems

Publication Type:
Article
Authors:
Chen, Ci; Xie, Lihua; Jiang, Yi; Xie, Kan; Xie, Shengli
Affiliations:
Guangdong University of Technology; Nanyang Technological University; City University of Hong Kong; Guangdong University of Technology
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2022.3172590
Publication Year:
2023
Pages:
2391-2398
Keywords:
Regulation; optimal control; system dynamics; standards; trajectory; process control; output feedback; adaptive optimal control; output tracking; reinforcement learning (RL); robust output regulation
Abstract:
In this article, we investigate the optimal output tracking problem for linear discrete-time systems with unknown dynamics using reinforcement learning (RL) and robust output regulation theory. Unlike most existing works that depend on state measurements, this output tracking problem permits the use of only the outputs of the reference system and the controlled system, rather than their states. The optimal tracking problem is formulated as a linear quadratic regulation problem by proposing a family of dynamic discrete-time controllers. It is then shown that solving the output tracking problem is equivalent to solving the output regulation equations, whose solution, however, requires complete and accurate knowledge of the system dynamics. To remove this requirement, an off-policy RL algorithm is proposed that uses only the measured output data along the system trajectory and the reference output. By introducing a reexpression error and analyzing the rank condition of the parameterization matrix, we ensure the uniqueness of the proposed RL-based optimal control via output feedback.
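The abstract describes an off-policy RL algorithm that solves a linear quadratic problem from measured data without knowing the system matrices. The sketch below is not the paper's output-feedback method; it is a standard, simplified state-feedback analogue (Q-learning policy iteration for discrete-time LQR) that illustrates the core idea: the quadratic Q-function of the current policy is identified by least squares from trajectory data, and the policy is improved from the identified Q-function, with no model used by the learner. The system matrices `A`, `B` and all weights are hypothetical and serve only to generate data.

```python
import numpy as np

# Hypothetical stable 2-state system: unknown to the learner, used only to simulate data.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Qc, Rc = np.eye(2), np.eye(1)   # quadratic stage-cost weights
n, m = 2, 1

def features(z):
    """Quadratic features so that z' H z = features(z) @ theta, H symmetric."""
    i, j = np.triu_indices(len(z))
    scale = np.where(i == j, 1.0, 2.0)       # off-diagonal terms appear twice
    return scale * np.outer(z, z)[i, j]

K = np.zeros((m, n))                         # initial stabilizing policy (A is stable)
rng = np.random.default_rng(0)

for _ in range(20):                          # policy iteration
    X, y = [], []
    x = rng.standard_normal(n)
    for _ in range(200):
        u = -K @ x + 0.5 * rng.standard_normal(m)   # exploratory (off-policy) input
        xn = A @ x + B @ u
        un = -K @ xn                          # target policy's action at the next state
        # Bellman equation of policy K: Q(x,u) - Q(xn,un) = stage cost
        X.append(features(np.concatenate([x, u])) - features(np.concatenate([xn, un])))
        y.append(x @ Qc @ x + u @ Rc @ u)
        x = xn
    theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    H = np.zeros((n + m, n + m))             # rebuild symmetric Q-function matrix
    H[np.triu_indices(n + m)] = theta
    H = H + H.T - np.diag(np.diag(H))
    K = np.linalg.inv(H[n:, n:]) @ H[n:, :n] # policy improvement: u = -K x
```

Because the dynamics are deterministic, the Bellman residual equation holds exactly along the data, so the least-squares solve is exact whenever the exploration noise makes the feature-difference matrix full rank, the rank-type condition the abstract alludes to. The paper's contribution is to carry this program through using only outputs, via dynamic output-feedback controllers and the output regulation equations.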