Decentralized Online Regularized Learning Over Random Time-Varying Graphs
Publication type:
Article
Authors:
Zhang, Xiwei; Li, Tao; Fu, Xiaozheng
Affiliations:
East China Normal University; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS; Ningbo University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2025.3563542
Publication date:
2025
Pages:
6609-6624
Keywords:
estimation
linear regression
mathematical models
symmetric matrices
noise
vectors
estimation error
technological innovation
noise measurement
convergence
decentralized online linear regression
random time-varying graph
regret analysis
regularization
persistence of excitation
Abstract:
We study the decentralized online regularized linear regression algorithm over random time-varying graphs. At each time step, every node runs an online estimation algorithm consisting of an innovation term that processes its own new measurement, a consensus term that takes a weighted sum of its own estimate and those of its neighbors, corrupted by additive and multiplicative communication noises, and a regularization term that prevents over-fitting. The regression matrices and graphs are not required to satisfy special statistical assumptions such as mutual independence, spatio-temporal independence, or stationarity. We develop a nonnegative supermartingale inequality for the estimation error and prove that the estimates of all nodes converge to the unknown true parameter vector almost surely if the algorithm gains, graphs, and regression matrices jointly satisfy the sample-path spatio-temporal persistence of excitation condition. In particular, this condition holds for appropriately chosen algorithm gains if the graphs are uniformly conditionally jointly connected and conditionally balanced, and the regression models of all nodes are uniformly conditionally spatio-temporally jointly observable, in which case the algorithm converges both in mean square and almost surely. In addition, we prove that the regret is upper bounded by O(T^{1−τ} ln T), where τ ∈ (0.5, 1) is a constant depending on the algorithm gains.
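To make the update structure described in the abstract concrete, the following is a minimal numerical sketch of one possible instantiation: each node combines an innovation step on its own measurement, a noisy consensus step over a randomly drawn graph, and an l2 regularization step. The gain schedules a_t, b_t, r_t, the Bernoulli edge model, the noise magnitudes, and the plain l2 regularizer are all illustrative assumptions, not the paper's exact construction or excitation conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N nodes jointly estimate theta in R^d from local
# scalar measurements y_i(t) = H_i(t) theta + v_i(t).
N, d, T = 5, 3, 2000
theta = rng.normal(size=d)        # unknown true parameter vector
x = np.zeros((N, d))              # per-node estimates

for t in range(1, T + 1):
    a_t = 1.0 / t**0.75           # innovation gain (assumed schedule)
    b_t = 1.0 / t**0.75           # consensus gain (assumed schedule)
    r_t = 0.01 * a_t              # regularization gain (assumed schedule)

    # Random time-varying graph: each directed edge present w.p. 0.5.
    A = (rng.random((N, N)) < 0.5).astype(float)
    np.fill_diagonal(A, 0.0)

    x_new = np.empty_like(x)
    for i in range(N):
        # Local measurement; the regression matrix here is a random row.
        H_i = rng.normal(size=(1, d))
        y_i = H_i @ theta + 0.1 * rng.normal()

        # Innovation term: process the node's own new measurement.
        innovation = H_i.T @ (y_i - H_i @ x[i])

        # Consensus term: weighted disagreement with neighbors, where each
        # received estimate is corrupted by multiplicative and additive
        # communication noise (illustrative noise model).
        consensus = np.zeros(d)
        for j in range(N):
            if A[i, j] > 0:
                received = (1 + 0.05 * rng.normal()) * x[j] \
                           + 0.05 * rng.normal(size=d)
                consensus += A[i, j] * (received - x[i])

        # Regularization term: plain l2 shrinkage to prevent over-fitting.
        x_new[i] = x[i] + a_t * innovation.ravel() \
                   + b_t * consensus - r_t * x[i]
    x = x_new

print("final estimation errors:", np.linalg.norm(x - theta, axis=1))
```

With decaying gains of this form, the estimation errors shrink over time in this toy run; the paper's actual convergence guarantees rest on the sample-path persistence of excitation and conditional connectivity conditions stated in the abstract, which this sketch does not verify.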