Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear-Quadratic Regulator Problem
Publication Type:
Article
Authors:
Mohammadi, Hesameddin; Zare, Armin; Soltanolkotabi, Mahdi; Jovanovic, Mihailo R.
Affiliations:
University of Southern California; University of Texas System; University of Texas Dallas
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2021.3087455
Publication Date:
2022
Pages:
2435-2450
Keywords:
data-driven control
gradient descent
gradient-flow dynamics
linear-quadratic regulator (LQR)
model-free control
nonconvex optimization
Abstract:
Model-free reinforcement learning attempts to find an optimal control action for an unknown dynamical system by directly searching over the parameter space of controllers. The convergence behavior and statistical properties of these approaches are often poorly understood because of the nonconvex nature of the underlying optimization problems and the lack of exact gradient computation. In this article, we take a step toward demystifying the performance and efficiency of such methods by focusing on the standard infinite-horizon linear-quadratic regulator problem for continuous-time systems with unknown state-space parameters. We establish exponential stability for the ordinary differential equation (ODE) that governs the gradient-flow dynamics over the set of stabilizing feedback gains and show that a similar result holds for the gradient descent method that arises from the forward Euler discretization of the corresponding ODE. We also provide theoretical bounds on the convergence rate and sample complexity of the random search method with two-point gradient estimates. We prove that the required simulation time for achieving ε-accuracy in the model-free setup and the total number of function evaluations both scale as log(1/ε).
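To make the abstract's algorithmic ingredients concrete, below is a minimal Python sketch of the random search method with two-point gradient estimates applied to a continuous-time LQR problem. All problem data (A, B, Q, R, Omega), the step size alpha, and the smoothing radius r are hypothetical placeholders rather than values from the paper; moreover, the sketch evaluates the LQR cost exactly through a Lyapunov equation, whereas the paper's model-free setting approximates function values from finite-length simulated trajectories. The update K ← K − α ĝ(K) is the forward Euler discretization of the gradient-flow ODE K̇ = −∇f(K), with the exact gradient replaced by the two-point estimate ĝ.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical problem data: a stable 2-state, 1-input system with
# identity cost weights (illustrative only, not taken from the paper).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # Hurwitz, so K = 0 is stabilizing
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
Omega = np.eye(2)  # covariance of the random initial condition

def lqr_cost(K):
    """f(K) = trace(P(K) Omega), where P(K) solves the closed-loop
    Lyapunov equation; returns inf for destabilizing gains. In the
    model-free setting this oracle would instead be approximated
    from finite-time simulated trajectories."""
    Acl = A - B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf  # K is not a stabilizing feedback gain
    # Solve (A - BK)^T P + P (A - BK) + Q + K^T R K = 0.
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return np.trace(P @ Omega)

def two_point_gradient(K, r=1e-3):
    """Two-point gradient estimate: difference the cost at K + rU and
    K - rU along a random direction U. The estimator's dimension-dependent
    scaling constant is absorbed into the step size here."""
    U = np.random.randn(*K.shape)
    U /= np.linalg.norm(U)
    return (lqr_cost(K + r * U) - lqr_cost(K - r * U)) / (2 * r) * U

# Random search: gradient descent driven by the two-point estimates,
# initialized at a stabilizing gain (K = 0, since A is Hurwitz).
K = np.zeros((1, 2))
alpha = 1e-2  # step size (a tuning constant, not a value from the paper)
for _ in range(5000):
    K = K - alpha * two_point_gradient(K)

print("final gain:", K, "cost:", lqr_cost(K))
```

In this sketch the cost is only queried through function evaluations, never differentiated, which mirrors the zeroth-order access model under which the paper derives its log(1/ε) bounds on simulation time and total function evaluations.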