A Fast Randomized Incremental Gradient Method for Decentralized Nonconvex Optimization
Publication Type:
Article
Authors:
Xin, Ran; Khan, Usman A.; Kar, Soummya
Affiliations:
Carnegie Mellon University; Tufts University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2021.3122586
Publication Date:
2022
Pages:
5150-5165
Keywords:
Convergence
Gradient methods
Optimization
Linear matrix inequalities
Stochastic processes
Robustness
Radio frequency
Decentralized nonconvex optimization
Incremental gradient methods
Variance reduction
Abstract:
In this article, we study decentralized nonconvex finite-sum minimization problems described over a network of nodes, where each node possesses a local batch of data samples. In this context, we analyze a single-timescale randomized incremental gradient method called GT-SAGA. GT-SAGA is computationally efficient, as it evaluates only one component gradient per node per iteration, and it achieves provably fast and robust performance by leveraging node-level variance reduction and network-level gradient tracking. For general smooth nonconvex problems, we show the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point and further describe regimes of practical significance in which, respectively, it outperforms the existing approaches and achieves a network-topology-independent iteration complexity. When the global function satisfies the Polyak-Łojasiewicz condition, we show that GT-SAGA exhibits linear convergence to an optimal solution in expectation and describe regimes of practical interest where the performance is network-topology independent and improves upon the existing methods. Numerical experiments are included to highlight the main convergence aspects of GT-SAGA in nonconvex settings.
Source URL:
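For reference, the Polyak-Łojasiewicz (PL) condition invoked in the abstract is the standard inequality below, stated for the global objective F with minimum value F*; the constant mu > 0 and this particular normalization are the usual textbook convention, assumed here rather than quoted from the paper:

```latex
% PL condition: the squared gradient norm dominates the suboptimality gap,
% so gradient descent-type methods can converge linearly without convexity.
\frac{1}{2}\,\lVert \nabla F(x) \rVert^{2} \;\geq\; \mu \left( F(x) - F^{\ast} \right) \qquad \text{for all } x .
```

The algorithmic description in the abstract (one component gradient per node per iteration, a SAGA-style variance-reduced estimator, and network-level gradient tracking) can be made concrete with a minimal NumPy sketch. This is an illustrative reconstruction under stated assumptions, not the authors' reference implementation: the toy least-squares components, the ring mixing matrix W, the step size alpha, and the helper grad are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 5, 20, 3                  # nodes, components per node, dimension

# Toy smooth components f_{i,j}(x) = 0.5 * (a_{ij}^T x - b_{ij})^2 (assumption:
# the paper covers general smooth nonconvex f_{i,j}; least squares keeps this short).
A = rng.normal(size=(n, m, d))
b = rng.normal(size=(n, m))

def grad(i, j, x):
    """Gradient of the j-th local component at node i (hypothetical helper)."""
    a = A[i, j]
    return (a @ x - b[i, j]) * a

# Doubly stochastic mixing matrix for a ring of n nodes (lazy Metropolis weights).
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

alpha = 0.02                        # step size (assumption; the paper gives explicit ranges)
x = np.zeros((n, d))                # local iterates x_i

# SAGA gradient tables and estimators, initialized with full local gradients.
table = np.stack([[grad(i, j, x[i]) for j in range(m)] for i in range(n)])
g = table.mean(axis=1)              # variance-reduced estimators g_i
y = g.copy()                        # gradient trackers y_i

for k in range(3000):
    # Network-level step: mix with neighbors, descend along the tracked gradient.
    x = W @ x - alpha * y
    g_new = np.empty_like(g)
    for i in range(n):
        j = rng.integers(m)         # one randomly chosen component per node per iteration
        fresh = grad(i, j, x[i])
        # SAGA estimator: fresh gradient, minus its stale table entry, plus table average.
        g_new[i] = fresh - table[i, j] + table[i].mean(axis=0)
        table[i, j] = fresh
    # Gradient tracking: y_i asymptotically follows the network-average gradient estimate.
    y = W @ y + g_new - g
    g = g_new

avg_grad = np.mean([np.mean([grad(i, j, x[i]) for j in range(m)], axis=0)
                    for i in range(n)], axis=0)
print("consensus error :", np.linalg.norm(x - x.mean(axis=0)))
print("gradient norm   :", np.linalg.norm(avg_grad))
```

The single-timescale structure highlighted in the abstract is visible in the loop: mixing, the incremental gradient update, and tracking all run with the one step size alpha, with no inner loops or periodic full-gradient recomputations.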