S-NEAR-DGD: A Flexible Distributed Stochastic Gradient Method for Inexact Communication

Publication type:
Article
Authors:
Iakovidou, Charikleia; Wei, Ermin
Affiliation:
Northwestern University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2022.3151734
Publication year:
2023
Pages:
1281-1287
Keywords:
Optimization; convergence; distributed algorithms; radio frequency; approximation algorithms; quantization (signal); probabilistic logic; distributed optimization; network optimization; quantization; stochastic optimization
Abstract:
We present and analyze a stochastic distributed method (S-NEAR-DGD) that can tolerate inexact computation and inaccurate information exchange to alleviate the problems of costly gradient evaluations and bandwidth-limited communication in large-scale systems. Our method is based on a class of flexible, distributed first-order algorithms that allow for the tradeoff of computation and communication to best accommodate the application setting. We assume that the information exchanged between nodes is subject to random distortion and that only stochastic approximations of the true gradients are available. Our theoretical results prove that the proposed algorithm converges linearly in expectation to a neighborhood of the optimal solution for strongly convex objective functions with Lipschitz gradients. We characterize the dependence of this neighborhood on algorithm and network parameters, the quality of the communication channel, and the precision of the stochastic gradient approximations used. Finally, we provide numerical results to evaluate the empirical performance of our method.
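To make the abstract's setting concrete, the following is a minimal sketch (not the authors' exact S-NEAR-DGD, and all parameter values here are illustrative assumptions) of a distributed gradient iteration in which nodes on a ring exchange quantized copies of their iterates, may run several consensus rounds per gradient evaluation (the computation/communication tradeoff mentioned above), and use noisy gradient approximations. As the abstract states, such an iteration converges to a neighborhood of the optimum rather than to the exact solution:

```python
import numpy as np

# Hypothetical sketch: n nodes on a ring cooperatively minimize
# f(x) = sum_i 0.5 * a_i * (x - b_i)^2 (strongly convex, Lipschitz gradients).
rng = np.random.default_rng(0)
n = 5
a = rng.uniform(1.0, 2.0, n)          # local curvatures
b = rng.uniform(-1.0, 1.0, n)         # local minimizers
x_star = np.sum(a * b) / np.sum(a)    # minimizer of the global sum

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def quantize(v, step=1e-3):
    """Model inexact communication by rounding exchanged values to a grid."""
    return np.round(v / step) * step

x = 2.0 * np.ones(n)                  # one scalar decision variable per node
alpha = 0.02                          # step size
for _ in range(1000):
    # Several consensus rounds per gradient step, each over a noisy
    # (quantized) channel -- trading extra communication for accuracy.
    x_mix = x.copy()
    for _ in range(3):
        x_mix = W @ quantize(x_mix)
    # Gradient step with a stochastic approximation of the true gradient.
    grad = a * (x_mix - b) + 1e-3 * rng.standard_normal(n)
    x = x_mix - alpha * grad

err = np.max(np.abs(x - x_star))      # distance to the optimum per node
print(err)
```

Because both the quantization error and the gradient noise persist at every iteration, `err` settles into a small but nonzero neighborhood of zero; shrinking the quantization step, the noise level, or the step size shrinks that neighborhood, consistent with the dependence characterized in the paper.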
Source URL: