Neural Operators for Bypassing Gain and Control Computations in PDE Backstepping
Publication type:
Article
Authors:
Bhan, Luke; Shi, Yuanyuan; Krstic, Miroslav
Affiliations:
University of California System; University of California San Diego
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2023.3347499
Publication date:
2024
Pages:
5310-5325
Keywords:
Backstepping
kernel
Artificial neural networks
stability analysis
Aerospace electronics
PD control
Integral equations
Aerospace control
Lyapunov analysis
Machine Learning
nonlinear control systems
Abstract:
We introduce a framework for eliminating the computation of controller gain functions in partial differential equation (PDE) control. We learn the nonlinear operator from the plant parameters to the control gains with a (deep) neural network, and we provide global exponential closed-loop stability guarantees under a neural network (NN) approximation of the feedback gains. Whereas in existing PDE backstepping finding the gain kernel requires a (one-time, offline) solution to an integral equation, the neural operator (NO) approach we propose learns the mapping from the functional coefficients of the plant PDE to the kernel function by employing a sufficiently large number of offline numerical solutions of the kernel integral equation, for sufficiently many different functional coefficients of the PDE model. We prove the existence of a DeepONet approximation, of arbitrarily high accuracy, of the exact nonlinear continuous operator mapping PDE coefficient functions into gain functions. Once such an approximation is proven to exist, learning the NO is standard; it is completed once and for all (never online), and the kernel integral equation never needs to be solved again for any new functional coefficient whose magnitude does not exceed that of the functional coefficients used for training. We also present an extension from approximating the gain kernel operator to approximating the full feedback law mapping, from plant parameter functions and state measurement functions to the control input, with semiglobal practical stability guarantees. Simulation illustrations are provided, and code is available online (https://github.com/lukebhan/NeuralOperatorsForGainKernels). This framework, which eliminates real-time recomputation of gains, has the potential to be game-changing for adaptive control of PDEs and for gain-scheduling control of nonlinear PDEs. This article requires no prior background in machine learning or neural networks.
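To make the operator-learning setup in the abstract concrete, the following is a minimal, hypothetical sketch of the DeepONet architecture it refers to: a branch network encodes the plant's functional coefficient beta(x) sampled at m sensor points, a trunk network encodes a kernel evaluation point (x, y), and their inner product approximates the gain kernel value k(x, y). The network sizes, the example coefficient, and the untrained random weights are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Tiny tanh MLP with random (untrained) weights, for illustration only."""
    params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

m, p = 50, 32                  # number of sensor points, latent feature width
branch = mlp([m, 64, p])       # encodes the sampled coefficient function beta
trunk = mlp([2, 64, p])        # encodes the kernel evaluation point (x, y)

def deeponet(beta_samples, xy):
    # DeepONet output: inner product of branch and trunk feature vectors
    return branch(beta_samples) @ trunk(xy)

xs = np.linspace(0.0, 1.0, m)
beta = 5.0 * np.cos(np.pi * xs)                 # an example plant coefficient
k_hat = deeponet(beta, np.array([0.5, 0.25]))   # estimate of k(0.5, 0.25)
print(float(k_hat))
```

In the paper's framework, the weights would be fit offline on many (coefficient, kernel) pairs obtained by numerically solving the kernel integral equation; after training, evaluating this forward pass replaces any online kernel solve.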