Improving the Transient Times for Distributed Stochastic Gradient Methods

Publication type:
Article
Authors:
Huang, Kun; Pu, Shi
Affiliations:
The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data; Shenzhen Institute of Artificial Intelligence & Robotics for Society
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2022.3201141
Publication year:
2023
Pages:
4127-4142
Keywords:
Distributed optimization; stochastic gradient methods; convex optimization
Abstract:
We consider the distributed optimization problem where n agents, each possessing a local cost function, collaboratively minimize the average of the n cost functions over a connected network. Assuming stochastic gradient information is available, we study a distributed stochastic gradient algorithm, called exact diffusion with adaptive stepsizes (EDAS), adapted from the Exact Diffusion method (Yuan et al., 2019) and NIDS (Li et al., 2019), and perform a nonasymptotic convergence analysis. We not only show that EDAS asymptotically achieves the same network-independent convergence rate as centralized stochastic gradient descent (SGD) for minimizing strongly convex and smooth objective functions, but also characterize the transient time needed for the algorithm to approach the asymptotic convergence rate, which behaves as K_T = O(n/(1-λ_2)), where 1-λ_2 stands for the spectral gap of the mixing matrix. To the best of our knowledge, EDAS achieves the shortest transient time when the average of the n cost functions is strongly convex and each cost function is smooth. Numerical simulations further corroborate and strengthen the obtained theoretical results.
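As an informal illustration of the algorithm family the abstract describes, the NumPy sketch below runs an exact-diffusion-style stochastic gradient recursion (the adapt-correct-combine form of Exact Diffusion) on a toy problem, and reports the quantity n/(1-λ_2) that governs the transient time. The quadratic local costs, ring topology, noise level, and fixed stepsize alpha are illustrative assumptions, not taken from the paper; in particular, EDAS itself uses adaptive stepsizes, which this sketch omits.

# Minimal sketch of an exact-diffusion-style update with stochastic gradients.
# All problem data (quadratic costs, ring graph, noise scale, fixed stepsize)
# are illustrative assumptions; EDAS uses adaptive stepsizes instead.
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, K = 8, 3, 0.05, 2000

# Local quadratics f_i(x) = 0.5 * ||x - b_i||^2; the global minimizer is mean(b_i).
B = rng.normal(size=(n, d))
x_star = B.mean(axis=0)

# Symmetric, doubly stochastic mixing matrix W for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3
W_bar = (np.eye(n) + W) / 2  # exact diffusion combines with (I + W)/2

def stoch_grad(X):
    # Gradients of the local quadratics plus i.i.d. Gaussian noise.
    return (X - B) + 0.1 * rng.normal(size=X.shape)

X = np.zeros((n, d))   # row i holds agent i's iterate
psi_prev = X.copy()    # psi^0 = x^0 makes the first correction vanish
for k in range(K):
    psi = X - alpha * stoch_grad(X)   # adapt: local stochastic gradient step
    phi = psi + X - psi_prev          # correct: remove the diffusion bias
    X = W_bar @ phi                   # combine: mix with neighbors
    psi_prev = psi

# The spectral gap 1 - λ_2 of W enters the transient-time bound K_T = O(n/(1-λ_2)).
lam = np.sort(np.linalg.eigvalsh(W))[::-1]
print("spectral gap 1-λ2:", 1 - lam[1], " n/(1-λ2):", n / (1 - lam[1]))
print("distance of averaged iterate to x*:", np.linalg.norm(X.mean(axis=0) - x_star))

With a fixed stepsize, the averaged iterate settles in a noise ball around x_star; a larger ring (larger n, smaller spectral gap) lengthens the transient phase before the network-independent rate is reached, consistent with the K_T = O(n/(1-λ_2)) scaling stated in the abstract.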