Online Distributed Stochastic Gradient Algorithm for Nonconvex Optimization With Compressed Communication

Publication Type:
Article
Authors:
Li, Jueyou; Li, Chaojie; Fan, Jing; Huang, Tingwen
Affiliations:
Chongqing Normal University; North China Electric Power University; Central South University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2023.3327183
Publication Date:
2024
Pages:
936-951
Keywords:
Bandit feedback; compressed communication; distributed optimization (DO); nonconvex optimization; online optimization; stochastic approximation
Abstract:
This article examines an online distributed optimization problem over an unbalanced digraph, in which a group of nodes in the network collectively searches for a minimizer of a time-varying global cost function while data are distributed among the computing nodes. As the problem size grows, communication inevitably becomes a bottleneck, since each node potentially transmits large amounts of information to its neighbors when exchanging messages. To handle this issue, we design an online stochastic gradient algorithm with compressed communication for the case in which gradient information is available. We obtain regret bounds for both nonconvex and convex cost functions that are of almost the same order as those of classic distributed optimization algorithms with exact communication. For the scenario in which gradient information is not accessible, a bandit version of the algorithm is then proposed, and explicit regret bounds are likewise established for both nonconvex and convex cost functions. The results reveal that the performance of the bandit-feedback method is close to that of the gradient-feedback method. Several numerical experiments corroborate the main theoretical findings and exemplify a remarkable speedup compared with existing distributed algorithms with exact communication.
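To illustrate the kind of compressed-communication update the abstract describes, below is a minimal sketch of one synchronous distributed stochastic gradient step. It assumes a top-k sparsification compressor and a row-stochastic mixing matrix, both chosen for illustration; the paper's actual compression operator, the weighting scheme used to handle the unbalanced digraph, and any error-feedback correction are not specified in the abstract.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest.

    Top-k sparsification is one common compressor; the paper's actual
    compression operator is not given in the abstract (assumption).
    """
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_dsgd_step(x, grads, W, lr, k):
    """One synchronous step of distributed SGD with compressed exchange.

    x:     (n, d) array, row i is node i's current iterate
    grads: (n, d) array, row i is node i's stochastic gradient
    W:     (n, n) mixing matrix matching the network (row-stochastic here)
    """
    # Each node compresses its iterate before sending it to neighbors.
    msgs = np.vstack([top_k(x[i], k) for i in range(x.shape[0])])
    # Consensus step on the compressed messages, then a local gradient step.
    mixed = W @ msgs
    return mixed - lr * grads

# Toy usage: 4 nodes, 10-dimensional decision variable, uniform mixing.
rng = np.random.default_rng(0)
n, d = 4, 10
W = np.full((n, n), 1.0 / n)  # complete-graph mixing, for illustration only
x = rng.standard_normal((n, d))
x = compressed_dsgd_step(x, rng.standard_normal((n, d)), W, lr=0.1, k=3)
```

Transmitting only the k surviving entries (values plus indices) instead of all d coordinates is what yields the communication savings the abstract refers to.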
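For the bandit setting, where nodes observe only function values rather than gradients, a standard construction is a smoothing-based gradient estimator. The sketch below shows the classical two-point estimator on the unit sphere; whether the paper uses one-point or two-point feedback, and its exact smoothing scheme, are not stated in the abstract, so this is an assumption.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    """Gradient-free (bandit) estimate of grad f(x) from two function values.

    Implements the standard two-point smoothing estimator
        g = (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u
    with u drawn uniformly from the unit sphere (assumption: the paper's
    actual bandit estimator is not described in the abstract).
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Toy usage on a quadratic cost, whose true gradient at x is 2x.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2)
g = two_point_gradient_estimate(f, np.ones(5), delta=1e-3, rng=rng)
```

Replacing the stochastic gradient in the compressed update with such an estimate is the generic route from gradient feedback to bandit feedback, which is consistent with the abstract's claim that the bandit method's regret nearly matches the gradient-feedback method's.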