An accelerated distributed stochastic gradient method with momentum

Publication type:
Article; Early Access
Authors:
Huang, Kun; Pu, Shi; Nedic, Angelia
Affiliations:
The Chinese University of Hong Kong, Shenzhen; Arizona State University; Arizona State University-Tempe
Journal:
MATHEMATICAL PROGRAMMING
ISSN/ISBN:
0025-5610
DOI:
10.1007/s10107-025-02217-0
Publication date:
2025
Keywords:
Optimization; Convergence behavior
Abstract:
In this paper, we introduce an accelerated distributed stochastic gradient method with momentum for solving the distributed optimization problem, where a group of n agents collaboratively minimize the average of their local objective functions over a connected network. The method, termed Distributed Stochastic Momentum Tracking (DSMT), is a single-loop algorithm that combines a momentum tracking technique with the Loopless Chebyshev Acceleration (LCA) method. We show that DSMT asymptotically achieves convergence rates comparable to those of the centralized stochastic gradient descent (SGD) method under a general variance condition on the stochastic gradients. Moreover, the numbers of iterations (transient times) required for DSMT to reach these rates behave as O(n^{5/3}/(1 − λ)) when minimizing general smooth objective functions and as O(√n/(1 − λ)) under the Polyak-Łojasiewicz (PL) condition, where 1 − λ denotes the spectral gap of the mixing matrix associated with the underlying network topology. Notably, the obtained results do not rely on multiple inter-node communications or stochastic gradient accumulation per iteration, and, to the best of our knowledge, the transient times are the shortest known under this setting.
Source URL:
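The abstract describes a distributed stochastic gradient scheme in which agents track the average stochastic gradient over a network while applying momentum. The following is a minimal illustrative sketch of that general class of methods (stochastic gradient tracking with a momentum buffer and a doubly stochastic mixing matrix), not the paper's exact DSMT recursion, which additionally employs Loopless Chebyshev Acceleration. The quadratic local objectives, the ring topology, and the step-size and momentum values are assumptions chosen only for illustration.

```python
# Illustrative sketch: distributed stochastic gradient tracking with momentum on a ring.
# NOT the paper's DSMT algorithm; objectives, topology, and parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

n, d = 8, 5                      # number of agents, problem dimension
alpha, beta = 0.01, 0.5          # step size and momentum parameter (assumed values)
noise_std = 0.1                  # std. dev. of stochastic gradient noise

# Local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2; the global objective is their average.
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def stochastic_grad(i, x):
    """Unbiased stochastic gradient of f_i at x (exact gradient plus Gaussian noise)."""
    return A[i].T @ (A[i] @ x - b[i]) + noise_std * rng.standard_normal(d)

# Doubly stochastic mixing matrix W for a ring: each agent averages with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                                   # local iterates, one row per agent
g = np.array([stochastic_grad(i, x[i]) for i in range(n)])
y = g.copy()                                           # gradient-tracking variables
v = np.zeros((n, d))                                   # momentum buffers

for k in range(1000):
    v = beta * v + y                                   # momentum on the tracked direction
    x = W @ (x - alpha * v)                            # local step, then mixing with neighbors
    g_new = np.array([stochastic_grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g                              # track the average stochastic gradient
    g = g_new

# Distance of the network-average iterate to the global least-squares minimizer.
x_bar = x.mean(axis=0)
H = sum(A[i].T @ A[i] for i in range(n))
c = sum(A[i].T @ b[i] for i in range(n))
x_star = np.linalg.solve(H, c)
print("||x_bar - x*|| =", np.linalg.norm(x_bar - x_star))
```

In this sketch each agent communicates with its neighbors only once per iteration and uses a single stochastic gradient sample, mirroring the abstract's point that the stated guarantees do not rely on multiple inter-node communications or gradient accumulation per iteration.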