Online Distributed Optimization With Nonconvex Objective Functions via Dynamic Regrets

Publication Type:
Article
Authors:
Lu, Kaihong; Wang, Long
Affiliations:
Shandong University of Science & Technology; Peking University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2023.3239432
Publication Date:
2023
Pages:
6509-6524
Keywords:
Dynamic regrets; multiagent networks; nonconvex optimization; online distributed optimization
Abstract:
In this article, the problem of online distributed optimization subject to a convex set is studied by employing a network of agents, where the objective functions allocated to the agents are nonconvex. Each agent has access only to the information of its own objective function from the previous time step, and can communicate only with its immediate neighbors via a time-varying directed graph. To tackle this problem, first, a new online distributed algorithm using gradient information is proposed based on consensus algorithms and projection-free strategies. Of particular interest is that dynamic regrets, whose offline benchmarks pursue the stationary points at each time step, are employed to measure the performance of the algorithm. Under mild assumptions on the graph and the objective functions, we prove that if the deviation in the objective function sequence grows sublinearly with the square root of the time horizon, and if the deviation in the gradient sequence grows sublinearly with the time horizon, then the dynamic regrets grow sublinearly. Second, considering the case where the gradient information of the objective functions is not available, we propose a zeroth-order online distributed projection-free algorithm, by which agents make decisions depending only on a random zeroth-order oracle. It turns out that under the same conditions as in the first case, if the smoothing parameters in the random zeroth-order oracles scale inversely with the time horizon, then the expectations of the dynamic regrets increase sublinearly. Finally, simulations are presented to demonstrate the effectiveness of our theoretical results.
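The abstract combines three building blocks: a consensus (weighted-averaging) step over the communication graph, a projection-free (Frank-Wolfe-style) update that replaces projection with a linear minimization oracle over the convex set, and, in the gradient-free case, a random zeroth-order oracle that estimates gradients from function values. A minimal sketch of these three ingredients is shown below; it is an illustrative reconstruction under assumed simplifications (a box constraint set, a two-point random-direction estimator, a given weight matrix `W`), not the paper's exact algorithm.

```python
import numpy as np

def zeroth_order_grad(f, x, delta, rng):
    """Two-point random-direction gradient estimate (one common form of a
    random zeroth-order oracle; the paper's oracle may differ in detail).
    delta is the smoothing parameter, which the paper scales inversely
    with the time horizon."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                      # random unit direction
    d = x.size
    return (d / delta) * (f(x + delta * u) - f(x)) * u

def frank_wolfe_step(x, grad, lower, upper, gamma):
    """Projection-free update over a box [lower, upper]^d: the linear
    minimization oracle picks each coordinate's bound from the gradient
    sign, then the iterate moves a fraction gamma toward that vertex."""
    v = np.where(grad > 0, lower, upper)        # argmin_{v in box} <grad, v>
    return x + gamma * (v - x)                  # convex combination stays feasible

def consensus_mix(X, W):
    """Consensus step: row i of X is agent i's decision; W is a
    row-stochastic weight matrix encoding the (time-varying) graph."""
    return W @ X
```

In an online loop, each agent would first mix its decision with its in-neighbors' via `consensus_mix`, then take a `frank_wolfe_step` using either the true gradient (first algorithm) or `zeroth_order_grad` (second algorithm). Because every update is a convex combination of feasible points, no projection onto the constraint set is ever needed.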
Source URL: