Global Convergence of Policy Gradient Primal-Dual Methods for Risk-Constrained LQRs

Publication Type:
Article
Authors:
Zhao, Feiran; You, Keyou; Basar, Tamer
Affiliations:
Tsinghua University; Tsinghua University; University of Illinois System; University of Illinois Urbana-Champaign
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2023.3234176
Publication Year:
2023
Pages:
2934-2949
Keywords:
Optimization; convergence; costs; optimal control; Lagrangian functions; trajectory; search problems; gradient descent; policy optimization (PO); reinforcement learning; risk-constrained linear quadratic regulator (RC-LQR); stochastic control
Abstract:
While techniques in optimal control theory are typically model-based, the policy optimization (PO) approach directly optimizes the performance metric of interest. Although PO has become an essential approach to reinforcement learning problems, its theoretical performance guarantees remain limited. In this article, we address the risk-constrained linear quadratic regulator (RC-LQR) problem via the PO approach, which requires solving a challenging nonconvex constrained optimization problem. To this end, we first build on our earlier result that an optimal policy has a time-invariant affine structure to show that the associated Lagrangian function is coercive, locally gradient dominated, and has a locally Lipschitz continuous gradient, based on which we establish strong duality. We then design policy gradient primal-dual methods with global convergence guarantees in both model-based and sample-based settings. Finally, we validate our methods in simulations using samples of system trajectories.
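
To make the abstract's algorithmic description concrete, the following is a minimal sketch of one generic policy gradient primal-dual iteration for a risk-constrained LQR. The symbols J(K) (quadratic cost of policy K), J_c(K) (risk measure), b (risk budget), \lambda (Lagrange multiplier), and the step sizes \eta_1, \eta_2 are illustrative assumptions and are not notation taken from the article itself.

% Hedged sketch of a policy gradient primal-dual iteration for a
% risk-constrained LQR; all symbols below are illustrative placeholders,
% not the paper's notation.
\begin{align*}
  \mathcal{L}(K,\lambda) &= J(K) + \lambda\,\bigl(J_c(K) - b\bigr),
      && \lambda \ge 0 \quad \text{(Lagrangian of the constrained problem)} \\
  K_{t+1} &= K_t - \eta_1 \,\nabla_K \mathcal{L}(K_t,\lambda_t),
      && \text{(primal step: policy gradient descent)} \\
  \lambda_{t+1} &= \max\bigl\{0,\; \lambda_t + \eta_2\,\bigl(J_c(K_{t+1}) - b\bigr)\bigr\}.
      && \text{(dual step: projected gradient ascent)}
\end{align*}

In the sample-based setting mentioned in the abstract, the exact gradient \nabla_K \mathcal{L} and the constraint value J_c(K) would be replaced by estimates formed from sampled system trajectories.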