Distributed Stochastic Optimization With Unbounded Subgradients Over Randomly Time-Varying Networks

Publication Type:
Article
Authors:
Chen, Yan; Fradkov, Alexander L.; Fu, Keli; Fu, Xiaozheng; Li, Tao
Affiliation:
East China Normal University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3525182
Publication Year:
2025
Pages:
4008-4015
Keywords:
Noise; cost function; noise measurement; linear matrix inequalities; additives; symmetric matrices; stochastic processes; convergence; vectors; upper bound; additive and multiplicative communication noise; distributed stochastic convex optimization; random graph; subgradient
Abstract:
Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic optimization in which networked nodes cooperatively minimize a sum of convex cost functions. The network is modeled by a sequence of time-varying random digraphs, with each node representing a local optimizer and each edge representing a communication link. In this article, we consider the distributed subgradient optimization algorithm with noisy measurements of the local cost functions' subgradients and with additive and multiplicative noises in the information exchanged between each pair of nodes. Using the stochastic Lyapunov method, convex analysis, algebraic graph theory, and martingale convergence theory, we prove that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then the algorithm step sizes can be designed so that all nodes' states converge to the global optimal solution almost surely.
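To make the algorithmic setting concrete, the following is a minimal simulation sketch of a distributed noisy-subgradient iteration of the kind the abstract describes. All specifics here are illustrative assumptions, not the paper's construction: local costs are taken as quadratics f_i(x) = (x - a_i)^2 / 2 (whose gradients grow linearly, matching the linear-growth condition), the random digraph activates each link independently with probability 0.5, additive noise corrupts received states, subgradient measurements are noisy, and the step size and consensus gain are decaying sequences chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5
a = rng.normal(size=N)      # f_i(x) = (x - a_i)^2 / 2; global minimizer is mean(a)
x = rng.normal(size=N)      # initial local states, one scalar per node

def noisy_subgrad(i, xi):
    # noisy measurement of the (sub)gradient of f_i at xi
    return (xi - a[i]) + 0.1 * rng.normal()

T = 20000
for t in range(1, T + 1):
    step = 0.5 / t            # subgradient step size: sum diverges, sum of squares converges
    gamma = 0.2 / t ** 0.6    # consensus gain, decaying more slowly than the step size
    # random digraph at time t: each directed link active with probability 0.5
    active = rng.random((N, N)) < 0.5
    x_new = x.copy()
    for i in range(N):
        for j in range(N):
            if i != j and active[i, j]:
                # neighbor state received over a noisy link (additive channel noise)
                received = x[j] + 0.05 * rng.normal()
                x_new[i] += gamma * (received - x[i])
        x_new[i] -= step * noisy_subgrad(i, x[i])
    x = x_new

print(x)  # node states after T iterations, clustered near mean(a)
```

The decaying gains mirror the role of "proper step sizes" in the abstract: the step-size sequence must inject enough total movement to reach the optimum while attenuating both the subgradient measurement noise and the communication noise, and the consensus gain must keep the states agreeing despite the randomly switching topology.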