On the Optimization Landscape of Dynamic Output Feedback Linear Quadratic Control
Publication Type:
Article
Authors:
Duan, Jingliang; Cao, Wenhan; Zheng, Yang; Zhao, Lin
Affiliations:
University of Science & Technology Beijing; National University of Singapore; Tsinghua University; University of California System; University of California San Diego
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2023.3275732
Publication Date:
2024
Pages:
920-935
Keywords:
Dynamic output feedback
optimization landscape
policy gradient
reinforcement learning (RL)
Abstract:
The convergence of policy gradient algorithms hinges on the optimization landscape of the underlying optimal control problem. Theoretical insights into these algorithms can often be acquired by analyzing the landscape of linear quadratic control problems. However, most of the existing literature only considers the optimization landscape for static full-state or output feedback policies (controllers). In this article, we investigate the more challenging case of dynamic output-feedback policies for linear quadratic regulation (abbreviated as dLQR), which is prevalent in practice but has a rather complicated optimization landscape. We first show how the dLQR cost varies with the coordinate transformation of the dynamic controller, and then derive the optimal transformation for a given observable stabilizing controller. One of our core results is the uniqueness of the stationary point of dLQR when it is observable, which provides an optimality certificate for solving dynamic controllers using policy gradient methods. Moreover, we establish conditions under which dLQR and linear quadratic Gaussian control are equivalent, thus providing a unified viewpoint of optimal control of both deterministic and stochastic linear systems. These results further shed light on designing policy gradient algorithms for more general decision-making problems with partially observed information.
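For orientation, the following is a minimal sketch of the dLQR setting summarized in the abstract. It assumes a discrete-time linear plant with a randomly distributed initial state and a full-order, strictly proper dynamic output-feedback controller parameterized by the matrices (A_K, B_K, C_K); the precise assumptions on the controller order, the initial-state/noise model, and the cost weights Q and R follow the paper and may differ in detail from this sketch.

```latex
\begin{align*}
  &\text{Plant (partially observed):} && x_{t+1} = A x_t + B u_t, \qquad y_t = C x_t,\\[2pt]
  &\text{Dynamic controller } \mathsf{K}=(A_K,B_K,C_K): && \xi_{t+1} = A_K \xi_t + B_K y_t, \qquad u_t = C_K \xi_t,\\[2pt]
  &\text{dLQR cost:} && J(\mathsf{K}) \;=\; \mathbb{E}_{x_0}\!\left[\,\sum_{t=0}^{\infty}\bigl(x_t^{\top} Q x_t + u_t^{\top} R u_t\bigr)\right],\\[2pt]
  &\text{Coordinate transformation (invertible } T\text{):} && \mathscr{T}_T(\mathsf{K}) \;=\; \bigl(T A_K T^{-1},\; T B_K,\; C_K T^{-1}\bigr).
\end{align*}
```

A policy gradient method then runs first-order iterations on J over the controller parameters (A_K, B_K, C_K); the landscape results stated in the abstract, namely how the cost behaves under the coordinate transformation and the uniqueness of the observable stationary point, are what certify that such iterations can recover an optimal dynamic controller.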