A Sensitivity-Based Data Augmentation Framework for Model Predictive Control Policy Approximation
Publication type:
Article
Author(s):
Krishnamoorthy, Dinesh
Affiliation:
Harvard University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2021.3124983
Publication date:
2022
Pages:
6090-6097
Keywords:
training data
training
optimization
optimal control
sensitivity
deep learning
approximation algorithms
predictive control
learning-based control
data augmentation
parametric optimization
Abstract:
Approximating a model predictive control (MPC) policy using expert-based supervised learning techniques requires a labeled training dataset sampled from the MPC policy. Such a dataset is typically obtained by sampling the feasible state space and evaluating the control law for each sample by solving the numerical optimization problem offline. Although the resulting approximate policy can be cheaply evaluated online, generating a large number of training samples to learn the MPC policy can be time-consuming and prohibitively expensive. This is one of the fundamental bottlenecks that limit the design and implementation of MPC policy approximation. This technical article addresses this challenge and proposes a novel sensitivity-based data augmentation scheme for direct policy approximation. The proposed approach exploits the parametric sensitivities to cheaply generate additional training samples in the neighborhood of the existing samples.
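The core idea can be illustrated on a toy problem. Below is a minimal sketch (not the paper's implementation) in which the MPC problem is stood in for by a hypothetical unconstrained parametric quadratic program, so that the optimal input is u*(x) = -H^{-1} F^T x and the parametric sensitivity du*/dx = -H^{-1} F^T follows directly from the stationarity (KKT) condition. Each "expensive" solve at a sampled state x0 is then augmented with extra labeled pairs in its neighborhood via the first-order predictor u*(x0 + d) ≈ u*(x0) + (du*/dx) d; all names (`solve_mpc`, `sensitivity`, `augment`) are illustrative.

```python
import numpy as np

def solve_mpc(x, H, F):
    """'Expensive' exact solve: stand-in for solving the full MPC problem.

    For the toy QP  min_u 0.5 u'Hu + x'Fu  the optimum is u* = -H^{-1} F' x.
    """
    return -np.linalg.solve(H, F.T @ x)

def sensitivity(H, F):
    """Parametric sensitivity du*/dx, obtained here from the linear
    stationarity condition H u + F' x = 0 (a special case of the KKT system)."""
    return -np.linalg.solve(H, F.T)

def augment(x0, u0, dudx, deltas):
    """Cheap augmentation: first-order Taylor predictor around (x0, u0),
    u*(x0 + d) ~ u0 + (du*/dx) d, yielding extra (state, input) pairs
    without re-solving the optimization problem."""
    return [(x0 + d, u0 + dudx @ d) for d in deltas]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    H = A @ A.T + 3.0 * np.eye(3)        # random SPD Hessian
    F = rng.standard_normal((4, 3))      # state-input coupling
    x0 = rng.standard_normal(4)          # one sampled state

    u0 = solve_mpc(x0, H, F)             # one expensive solve ...
    dudx = sensitivity(H, F)             # ... plus one sensitivity factorization
    deltas = [0.05 * rng.standard_normal(4) for _ in range(5)]
    extra = augment(x0, u0, dudx, deltas)  # five nearly free extra samples
    print(len(extra), "augmented samples generated")
```

For this linear-quadratic toy problem the first-order predictor is exact; in the general constrained nonlinear setting the sensitivities come from the full KKT system and the predicted samples carry a second-order error in the step size, which is what keeps the augmented labels useful only in a neighborhood of the solved sample.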