Stochastic MPC With Dynamic Feedback Gain Selection and Discounted Probabilistic Constraints
Publication type:
Article
Authors:
Yan, Shuhao; Goulart, Paul J.; Cannon, Mark
Affiliations:
Cornell University; University of Oxford
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2021.3128466
Publication year:
2022
Pages:
5885-5899
Keywords:
Costs
Chebyshev approximation
Stochastic processes
Additives
Probability distribution
Uncertainty
Cost function
Chance constraints
Chebyshev inequality
Dynamic programming
Model predictive control (MPC)
Multiobjective optimization
Stochastic convergence
Abstract:
This article considers linear discrete-time systems with additive disturbances and designs a model predictive control (MPC) law incorporating a dynamic feedback gain to minimize a quadratic cost function subject to a single chance constraint. The feedback gain is selected online, and two selection methods are provided, each based on minimizing an upper bound on the predicted cost. The chance constraint is defined as a discounted sum of violation probabilities over an infinite horizon. By penalizing violation probabilities close to the initial time and assigning vanishingly small weights to violation probabilities in the far future, this form of constraint admits an MPC law with guaranteed recursive feasibility without a boundedness assumption on the disturbance. A computationally convenient MPC optimization problem is formulated using Chebyshev's inequality, and an online constraint-tightening technique is introduced to ensure recursive feasibility. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition. Dynamic feedback gain selection reduces the closed-loop cost and mitigates the conservativeness of Chebyshev's inequality; it also yields a larger feasible set of initial conditions. Numerical simulations illustrate these results.
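The constraint form described in the abstract, a discounted sum of violation probabilities bounded via Chebyshev's inequality, can be illustrated with a small numeric sketch. This is a hypothetical illustration of the constraint structure only, not the paper's algorithm; the discount factor, state limit, and predicted moments below are invented for the example.

```python
def chebyshev_tail_bound(mean, var, c):
    """Upper bound on P(x >= c) from Chebyshev's inequality.

    Chebyshev gives P(|x - mean| >= a) <= var / a**2 for a > 0, which
    upper-bounds the one-sided tail P(x >= c) when c > mean.
    """
    a = c - mean
    if a <= 0:
        return 1.0  # bound is vacuous when the limit does not exceed the mean
    return min(1.0, var / a**2)


def discounted_violation_sum(means, variances, c, gamma):
    """Discounted sum of per-step violation-probability bounds.

    Approximates sum_k gamma**k * P(x_k >= c) over the prediction horizon,
    with each probability replaced by its Chebyshev upper bound. Assigning
    weight gamma**k (0 < gamma < 1) makes far-future violations count for
    vanishingly little, which is the mechanism the abstract attributes to
    the recursive-feasibility guarantee.
    """
    return sum(gamma**k * chebyshev_tail_bound(m, v, c)
               for k, (m, v) in enumerate(zip(means, variances)))


# Example: predicted means decaying toward 0 while variance grows with
# the prediction step (all numbers hypothetical).
means = [0.8, 0.5, 0.3, 0.1, 0.0]
variances = [0.01, 0.02, 0.03, 0.04, 0.05]
total = discounted_violation_sum(means, variances, c=1.0, gamma=0.9)
print(total)  # compare against a discounted-violation budget epsilon
```

In an MPC setting, a constraint of this shape would require `total <= epsilon` for a chosen budget; because Chebyshev's inequality holds for any distribution with the given mean and variance, the bound is distribution-free but conservative, which is the conservativeness the paper's gain-selection methods aim to mitigate.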