Learning for Control: L1-Error Bounds for Kernel-Based Regression

Publication Type:
Article
Authors:
Bisiacco, Mauro; Pillonetto, Gianluigi
Affiliation:
University of Padua
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2024.3372882
Publication Date:
2024
Pages:
6530-6545
Keywords:
Kernel; Hilbert space; stability analysis; dynamical systems; convergence; uncertainty; sufficient conditions; BIBO stability; kernel-based regularization; linear systems; learning theory; stable reproducing kernel Hilbert spaces
Abstract:
We consider functional regression models with noisy outputs resulting from linear transformations. Within regularization theory in reproducing kernel Hilbert spaces (RKHSs), much work has been devoted to building uncertainty bounds around kernel-based estimates, hence characterizing their convergence rates. Such results are typically formulated using either the average squared prediction loss or the RKHS norm. However, in signal processing and in emerging areas such as learning for control, measuring the estimation error through the L1 norm is often more advantageous. It can, for example, provide insight into the convergence rate in the Laplace/Fourier domain, whose role is crucial in the analysis of dynamical systems. For this reason, we consider all the RKHSs H associated with Lebesgue measurable positive-definite kernels that induce subspaces of L1, also known as stable RKHSs in the literature. The inclusion H ⊂ L1 is then characterized. This permits converting all the error bounds that depend on the RKHS norm into bounds expressed in terms of the L1 norm. We also show that our result is optimal: no better reformulation of the bounds in L1 than the one presented here exists.
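The setting described in the abstract can be illustrated with a small numerical sketch. The snippet below is not the paper's algorithm; it is a minimal, self-contained example of kernel-based regularization for a linear (convolution) measurement model, with the estimation error reported in the L1 norm. The kernel choice (a first-order stable-spline/TC kernel, whose RKHS is a subspace of L1), the true impulse response, and the noise level are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): kernel-based regularization for
# estimating an impulse response from noisy convolved outputs, with the
# estimation error measured in the L1 norm. All parameter values are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Discretized time axis for the unknown impulse response g.
n, dt = 200, 0.05
t = dt * np.arange(1, n + 1)

# Stable-spline / TC kernel K(s,t) = exp(-beta * max(s,t)): a Lebesgue
# measurable positive-definite kernel inducing a stable RKHS (H subset of L1).
beta = 1.0
K = np.exp(-beta * np.maximum.outer(t, t))

# True impulse response (assumed, for simulation only) and random input.
g_true = np.exp(-t) * np.sin(3 * t)
m = 300
u = rng.standard_normal(m)

# Linear transformation: Phi @ g approximates the convolution (u * g)(t_i).
Phi = np.zeros((m, n))
for i in range(m):
    for j in range(min(i + 1, n)):
        Phi[i, j] = u[i - j] * dt

# Noisy outputs of the linear transformation.
sigma = 0.1
y = Phi @ g_true + sigma * rng.standard_normal(m)

# Regularized (kernel ridge / Gaussian-process style) estimate:
#   g_hat = K Phi^T (Phi K Phi^T + sigma^2 I)^{-1} y.
G = Phi @ K @ Phi.T + sigma**2 * np.eye(m)
g_hat = K @ Phi.T @ np.linalg.solve(G, y)

# L1 estimation error (Riemann-sum approximation of ||g_hat - g||_L1); this
# is the error measure the paper's bound conversion targets.
err_l1 = dt * np.abs(g_hat - g_true).sum()
print(f"L1 estimation error: {err_l1:.4f}")
```

In this discretized setting, the paper's theme corresponds to turning an RKHS-norm uncertainty bound on g_hat - g into an L1 bound via the (bounded) inclusion H ⊂ L1, i.e., an inequality of the form ||f||_L1 ≤ c ||f||_H with c the norm of the inclusion operator.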
Source URL: