Distributed Variable Sample-Size Stochastic Optimization With Fixed Step-Sizes
Publication Type:
Article
Authors:
Lei, Jinlong; Yi, Peng; Chen, Jie; Hong, Yiguang
Affiliations:
Tongji University (all authors)
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2022.3179216
Publication Year:
2022
Pages:
5630-5637
Keywords:
convergence
stochastic processes
cost function
convex functions
costs
complexity theory
distributed algorithms
distributed optimization
multiagent systems
stochastic optimization
variance reduction
Abstract:
In this article, we consider distributed stochastic optimization over randomly switching networks, where agents collaboratively minimize the average of all agents' local expectation-valued convex cost functions. Due to the stochasticity of gradient observations, the distributedness of local functions, and the randomness of communication topologies, distributed algorithms with exact convergence guarantees under fixed step-sizes have not yet been achieved. This work incorporates a variance reduction scheme into the distributed stochastic gradient tracking algorithm, where local gradients are estimated by averaging over a variable number of sampled gradients. For an independently and identically distributed random network, we show that all agents' iterates converge almost surely to the same optimal solution under fixed step-sizes. When the global cost function is strongly convex and the sample size increases at a geometric rate, we prove that the iterates converge geometrically to the unique optimal solution, and we establish the iteration, oracle, and communication complexity. The algorithm's performance, including rate and complexity analysis, is further investigated with constant step-sizes and a polynomially increasing sample size. Finally, the empirical performance of the algorithm is illustrated with numerical examples.
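The abstract describes a gradient-tracking update driven by mini-batch gradient estimates whose batch size grows geometrically. Below is a minimal NumPy sketch of that idea on an illustrative quadratic problem; it uses a fixed complete-graph mixing matrix rather than the paper's randomly switching networks, and all problem data, parameter values, and names (`sampled_grad`, `alpha`, `rho`) are hypothetical choices for illustration, not the authors' exact setup.

```python
import numpy as np

# Hypothetical setup: n agents minimize the average of local quadratics
#   f_i(x) = E[(a_i^T x - b_i + noise)^2] / 2,
# whose minimizer solves (A^T A) x = A^T b.
rng = np.random.default_rng(0)
n, d = 5, 3                      # number of agents, problem dimension
A = rng.normal(size=(n, d))      # one local data row per agent
b = rng.normal(size=n)

# Doubly stochastic mixing matrix; a complete graph for simplicity here,
# whereas the paper treats i.i.d. randomly switching topologies.
W = np.full((n, n), 1.0 / n)

def sampled_grad(i, x, batch):
    """Average of `batch` noisy gradients of f_i at x (stochastic oracle)."""
    noise = rng.normal(size=batch).mean()   # variance shrinks as 1/batch
    return A[i] * (A[i] @ x - b[i] + noise)

alpha, rho = 0.05, 0.95          # fixed step-size; batch grows like rho^{-k}
x = np.zeros((n, d))             # local iterates, one row per agent
g = np.array([sampled_grad(i, x[i], 1) for i in range(n)])
y = g.copy()                     # gradient trackers, initialized at local gradients

for k in range(100):
    x = W @ x - alpha * y                     # consensus step + fixed-step descent
    batch = int(np.ceil(rho ** -(k + 1)))     # geometrically increasing sample size
    g_new = np.array([sampled_grad(i, x[i], batch) for i in range(n)])
    y = W @ y + g_new - g                     # track the network-average gradient
    g = g_new

x_star = np.linalg.solve(A.T @ A, A.T @ b)    # closed-form optimum for comparison
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```

Because the batch size grows geometrically, the variance of each agent's gradient estimate decays geometrically, which is what allows a fixed step-size to yield exact (rather than neighborhood) convergence in the regime the abstract describes.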