Online Learning Over Dynamic Graphs via Distributed Proximal Gradient Algorithm

Publication type:
Article
Authors:
Dixit, Rishabh; Bedi, Amrit Singh; Rajawat, Ketan
Affiliations:
Rutgers University System; Rutgers University New Brunswick; Indian Institute of Technology System (IIT System); Indian Institute of Technology (IIT) - Kanpur
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2020.3033712
Publication date:
2021
Pages:
5065-5079
Keywords:
Heuristic algorithms; Signal processing algorithms; Convex functions; Distributed algorithms; Network topology; Optimization; Robot sensing systems; distributed optimization; dynamic regret; online convex optimization; sparse signal recovery
Abstract:
We consider the problem of tracking the minimum of a time-varying convex optimization problem over a dynamic graph. Motivated by target tracking and parameter estimation problems in intermittently connected robotic and sensor networks, the goal is to design a distributed algorithm capable of handling nondifferentiable regularization penalties. The proposed proximal online gradient descent algorithm runs in a fully decentralized manner and utilizes consensus updates over possibly disconnected graphs. The performance of the proposed algorithm is analyzed by developing bounds on its dynamic regret in terms of the cumulative path length of the time-varying optimum. It is shown that, compared to the centralized case, the dynamic regret incurred by the proposed algorithm over T time slots is worse by only a factor of log(T), despite the disconnected and time-varying network topology. The empirical performance of the proposed algorithm is tested on the distributed dynamic sparse recovery problem, where it incurs a dynamic regret close to that of the centralized algorithm.
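
The sketch below illustrates, under stated assumptions, the kind of decentralized proximal online gradient descent the abstract describes, applied to dynamic sparse recovery: at each time slot every node mixes its iterate with its neighbors' through a doubly stochastic weight matrix of the (possibly time-varying) graph, takes a gradient step on its local time-varying least-squares loss, and applies the soft-thresholding proximal operator of the l1 penalty. This is not the authors' exact algorithm or analysis; the function names, step size, mixing matrix, and synthetic data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's exact method) of decentralized
# proximal online gradient descent for dynamic sparse recovery.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def decentralized_prox_ogd(A_seq, y_seq, W_seq, lam=0.1, step=0.05):
    """
    A_seq[t][i]: local sensing matrix of node i at time slot t
    y_seq[t][i]: local measurement vector of node i at time slot t
    W_seq[t]   : doubly stochastic mixing matrix of the graph at time slot t
    Returns the trajectory of per-node iterates.
    """
    T = len(A_seq)
    n = len(A_seq[0])            # number of nodes
    d = A_seq[0][0].shape[1]     # signal dimension
    X = np.zeros((n, d))         # one iterate per node
    history = []
    for t in range(T):
        # 1) consensus step: average with neighbors according to W_seq[t]
        X_mix = W_seq[t] @ X
        # 2) local proximal gradient step on
        #    f_i^t(x) = 0.5*||A_i^t x - y_i^t||^2 + lam*||x||_1
        for i in range(n):
            grad = A_seq[t][i].T @ (A_seq[t][i] @ X_mix[i] - y_seq[t][i])
            X[i] = soft_threshold(X_mix[i] - step * grad, step * lam)
        history.append(X.copy())
    return history

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, m, T = 5, 20, 10, 50
    x_true = np.zeros(d); x_true[:3] = 1.0       # sparse target that drifts slowly
    W = np.full((n, n), 1.0 / n)                 # fixed complete-graph averaging, for simplicity
    A_seq, y_seq, W_seq = [], [], []
    for t in range(T):
        x_true += 0.01 * rng.standard_normal(d) * (np.abs(x_true) > 0)  # drift on the support
        A_t = [rng.standard_normal((m, d)) for _ in range(n)]
        y_t = [A @ x_true + 0.01 * rng.standard_normal(m) for A in A_t]
        A_seq.append(A_t); y_seq.append(y_t); W_seq.append(W)
    traj = decentralized_prox_ogd(A_seq, y_seq, W_seq)
    print("final node-0 estimate (first 5 entries):", np.round(traj[-1][0][:5], 3))
```

In this toy setup the mixing matrix is fixed and corresponds to a complete graph; in the setting of the paper, W_seq[t] would instead reflect a time-varying and possibly disconnected topology, which is what the log(T) factor in the dynamic regret bound accounts for.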