Robust Online Learning Over Networks
Document Type:
Article
Authors:
Bastianello, Nicola; Deplano, Diego; Franceschelli, Mauro; Johansson, Karl H.
Affiliations:
Royal Institute of Technology; University of Cagliari
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3441723
Publication Date:
2025
Pages:
933-946
Keywords:
Measurement
Convergence
Computational modeling
Training
Distributed databases
Robustness
Numerical models
Asynchronous networks
Distributed learning
Online learning
Unreliable communications
Abstract:
The recent deployment of multiagent networks has enabled the distributed solution of learning problems, where agents cooperate to train a global model without sharing their local, private data. This work specifically targets some prevalent challenges inherent to distributed learning: 1) online training, i.e., the local data change over time; 2) asynchronous agent computations; 3) unreliable and limited communications; and 4) inexact local computations. To tackle these challenges, we apply the distributed operator theoretical (DOT) version of the alternating direction method of multipliers (ADMM), which we call DOT-ADMM. We prove that if the DOT-ADMM operator is metric subregular, then it converges with a linear rate for a large class of (not necessarily strongly) convex learning problems toward a bounded neighborhood of the optimal time-varying solution, and we characterize how this neighborhood depends on 1)-4). We further derive an easy-to-verify condition for ensuring the metric subregularity of an operator, and illustrate it with tutorial examples on linear and logistic regression problems. We corroborate the theoretical analysis with numerical simulations comparing DOT-ADMM with other state-of-the-art algorithms, showing that only the proposed algorithm exhibits robustness to 1)-4).
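For intuition only, the Python sketch below runs a simplified consensus-ADMM loop on an online distributed least-squares problem of the kind the abstract mentions. It is not the paper's DOT-ADMM: the consensus step uses a global average as a stand-in for the peer-to-peer exchanges over an asynchronous, unreliable network that the paper analyzes, and the problem sizes, penalty parameter rho, and drifting data model are arbitrary choices made for illustration.

```python
import numpy as np

# Illustrative sketch, not the paper's algorithm: simplified consensus ADMM
# for online distributed least squares with slowly drifting data.
rng = np.random.default_rng(0)
N, d, m = 5, 3, 20      # agents, model dimension, samples per agent (arbitrary)
rho, T = 1.0, 50        # ADMM penalty and number of time steps (arbitrary)

x = np.zeros((N, d))    # local primal variables, one per agent
u = np.zeros((N, d))    # local scaled dual variables
z = np.zeros(d)         # consensus variable (the shared model estimate)

x_true = rng.normal(size=d)   # ground-truth model that drifts over time (online setting)

for k in range(T):
    # online data: the optimum moves slowly, so the algorithm must track it
    x_true += 0.01 * rng.normal(size=d)
    A = rng.normal(size=(N, m, d))
    b = A @ x_true + 0.05 * rng.normal(size=(N, m))

    # local proximal (x-) update: each agent solves its regularized least squares
    for i in range(N):
        H = A[i].T @ A[i] + rho * np.eye(d)
        x[i] = np.linalg.solve(H, A[i].T @ b[i] + rho * (z - u[i]))

    # consensus (z-) update: a global average, used here only as a simplification
    # of the peer-to-peer, possibly asynchronous exchanges studied in the paper
    z = (x + u).mean(axis=0)

    # dual (u-) update
    u += x - z

print("tracking error at final step:", np.linalg.norm(z - x_true))
```

The point of the sketch is the split into a local proximal step, a consensus step, and a dual step; the paper's contribution lies in making this kind of iteration provably robust to online data, asynchrony, unreliable communications, and inexact local computations, which the toy loop above does not model.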