Adaptive Control via Lyapunov-Based Deep Long Short-Term Memory Networks
Publication type:
Article
Authors:
Shen, Xuehui; Griffis, Emily J.; Wu, Wenyu; Dixon, Warren E.
Affiliations:
State University System of Florida; University of Florida
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2025.3558376
Publication date:
2025
Pages:
6199-6205
Keywords:
Long short-term memory
Computer architecture
Function approximation
Stability analysis
Adaptation models
Feedforward systems
Training
Real-time systems
Logic gates
Vectors
Adaptive control
Long short-term memory networks (LSTMs)
Lyapunov methods
Neural networks
Nonlinear control systems
Abstract:
Motivated by the memory capabilities of long short-term memory (LSTM) networks and the improved function approximation power of deep learning, this article develops a Lyapunov-based adaptive controller using a deep LSTM neural network (NN) architecture. The architecture is made deep by stacking LSTM cells on top of each other and is therefore referred to as a stacked LSTM (SLSTM). Specifically, an adaptive SLSTM architecture with shortcut connections is developed and implemented in the controller as a feedforward estimate. Analytical adaptive laws derived from a Lyapunov-based stability analysis update the SLSTM weights in real time, allowing the SLSTM estimate to approximate the unknown drift dynamics. The stability analysis ensures asymptotic tracking error convergence for the developed Lyapunov-based stacked LSTM (Lb-SLSTM) controller and weight adaptation law. Compared to the baseline Lb-LSTM and Lyapunov-based deep neural network (Lb-DNN) architectures, the Lb-SLSTM adaptive controller yielded average improvements of 22.24% and 70.01% in tracking error performance and 40.16% and 81.32% in function approximation error performance, respectively. Furthermore, the Lb-SLSTM model yielded 96.00% and 98.75% improvements in maximum steady-state error performance over the Lb-LSTM and Lb-DNN models, respectively.
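The sketch below is a minimal illustration, not the authors' implementation: it shows, in PyTorch, a two-layer stacked LSTM with shortcut connections used as a feedforward estimate of unknown drift dynamics inside a simple tracking controller, with one SGD step on e^T f_hat standing in for the article's Lyapunov-derived weight adaptation law. The class `StackedLSTM`, the function `simulate`, the assumed drift `drift`, and the gains `kp` and `gamma` are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the paper's code): SLSTM feedforward
# estimate + proportional feedback, with a gradient-style adaptation update.
import math
import torch
import torch.nn as nn

class StackedLSTM(nn.Module):
    """Two stacked LSTM cells with shortcut (residual) connections."""
    def __init__(self, n_state, n_hidden):
        super().__init__()
        self.cell1 = nn.LSTMCell(n_state, n_hidden)
        self.cell2 = nn.LSTMCell(n_hidden, n_hidden)
        self.lift = nn.Linear(n_state, n_hidden)   # lift input so shortcut dimensions match
        self.head = nn.Linear(n_hidden, n_state)   # map hidden state back to the drift dimension

    def forward(self, x, hc1, hc2):
        h1, c1 = self.cell1(x, hc1)
        z1 = h1 + self.lift(x)                     # shortcut around the first LSTM cell
        h2, c2 = self.cell2(z1, hc2)
        z2 = h2 + z1                               # shortcut around the second LSTM cell
        return self.head(z2), (h1, c1), (h2, c2)

def simulate(steps=2000, dt=0.005, kp=5.0, gamma=1e-2, n=2, width=16):
    drift = lambda x: torch.tanh(x) + 0.5 * torch.sin(2.0 * x)  # stand-in "unknown" drift f(x)
    net = StackedLSTM(n, width)
    opt = torch.optim.SGD(net.parameters(), lr=gamma)           # SGD step plays the role of a gain-Gamma update
    x = torch.zeros(1, n)
    hc1 = (torch.zeros(1, width), torch.zeros(1, width))
    hc2 = (torch.zeros(1, width), torch.zeros(1, width))
    for k in range(steps):
        t = k * dt
        xd = torch.tensor([[math.sin(t), math.cos(t)]])         # desired trajectory
        xd_dot = torch.tensor([[math.cos(t), -math.sin(t)]])
        e = xd - x                                               # tracking error
        f_hat, hc1, hc2 = net(x, hc1, hc2)                       # SLSTM feedforward estimate of f(x)
        u = xd_dot - f_hat.detach() + kp * e                     # feedforward cancellation + feedback
        # Tracking-error-driven update: descending e^T f_hat mimics an adaptation
        # law of the form W_dot = -Gamma * (d f_hat / d W)^T e.
        loss = (e.detach() * f_hat).sum()
        opt.zero_grad(); loss.backward(); opt.step()
        hc1 = tuple(h.detach() for h in hc1)                     # truncate the graph between steps
        hc2 = tuple(h.detach() for h in hc2)
        x = x + dt * (drift(x) + u)                              # Euler step of x_dot = f(x) + u
    return e.norm().item()

if __name__ == "__main__":
    print("final tracking error norm:", simulate())
```

In this hedged example, the gradient step on e^T f_hat only approximates the structure of a Lyapunov-derived adaptation law; the article's analysis additionally establishes asymptotic tracking error convergence, which a generic SGD update does not guarantee.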