Control Law Learning Based on LQR Reconstruction With Inverse Optimal Control

Publication Type:
Article
Authors:
Qu, Chendi; He, Jianping; Duan, Xiaoming
Affiliation:
Shanghai Jiao Tong University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3469788
Publication Date:
2025
Pages:
1350-1357
Keywords:
Trajectory optimization; linear programming; estimation; object recognition; mobile agents; mathematical models; sensitivity analysis; search problems; robots; inverse optimal control; linear systems; LQR control
Abstract:
Designing controllers to generate various trajectories has been studied for years, while recovering an optimal controller from observed trajectories has recently received increasing attention. In this article, we reveal that the inherent linear quadratic regulator (LQR) problem of a moving agent can be reconstructed from its trajectory observations alone, which enables one to learn the control law of a target agent autonomously. Specifically, we propose a novel inverse optimal control method to identify the weighting matrices of a discrete-time, finite-horizon LQR, and we provide the corresponding identifiability conditions. We then obtain the optimal estimate of the control horizon via binary search and, finally, reconstruct the LQR problem from the aforementioned estimates. The strength of learning the control law through optimization-problem recovery lies in its low computational cost and strong generalization ability. We apply our algorithm to the prediction of future control inputs and further derive the corresponding discrepancy loss. Simulations and hardware experiments on a self-designed robot platform illustrate the effectiveness of our work.
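For context, the forward problem that the abstract's inverse method seeks to invert is the standard discrete-time, finite-horizon LQR. The sketch below is not the authors' implementation; it is a minimal illustration of that forward model, with assumed matrices A, B, Q, R, Qf, horizon N, and helper names finite_horizon_lqr and rollout chosen here for illustration. An inverse optimal control method such as the one described would aim to recover Q, R (up to scale) and the horizon from trajectories like the one produced by rollout.

```python
# Minimal sketch (illustrative, not the paper's code): forward
# discrete-time finite-horizon LQR via backward Riccati recursion.
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Return the time-varying feedback gains K_0, ..., K_{N-1}."""
    P = Qf                      # terminal cost-to-go matrix
    gains = []
    for _ in range(N):          # backward Riccati recursion
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]          # reorder so gains[k] applies at step k

def rollout(A, B, gains, x0):
    """Simulate the closed loop u_k = -K_k x_k and record the trajectory."""
    xs, us, x = [x0], [], x0
    for K in gains:
        u = -K @ x
        x = A @ x + B @ u
        us.append(u)
        xs.append(x)
    return np.array(xs), np.array(us)

# Toy double-integrator example with assumed weights (illustrative only).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R, Qf = np.eye(2), np.eye(1), 10 * np.eye(2)
gains = finite_horizon_lqr(A, B, Q, R, Qf, N=20)
xs, us = rollout(A, B, gains, x0=np.array([1.0, 0.0]))
```

Given such trajectory data, one plausible reading of the abstract's pipeline is: fit the weighting matrices by inverse optimal control, then search over candidate horizons N with a binary search, scoring each candidate by how well the reconstructed LQR reproduces the observed states and inputs.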