Linear Convergence of First- and Zeroth-Order Primal-Dual Algorithms for Distributed Nonconvex Optimization
Publication type:
Article
Authors:
Yi, Xinlei; Zhang, Shengjun; Yang, Tao; Chai, Tianyou; Johansson, Karl H.
Affiliations:
Royal Institute of Technology; University of North Texas System; University of North Texas Denton; Northeastern University - China
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2021.3108501
Publication year:
2022
Pages:
4194-4201
Keywords:
Convergence
Cost function
Convex functions
Costs
Technological innovation
Lyapunov methods
Laplace equations
Distributed nonconvex optimization
first-order algorithm
linear convergence
primal-dual algorithm
zeroth-order algorithm
Abstract:
This article considers the distributed nonconvex optimization problem of minimizing a global cost function, formed as a sum of local cost functions, using only local information exchange. We first consider a distributed first-order primal-dual algorithm. We show that it converges sublinearly to a stationary point if each local cost function is smooth, and linearly to a global optimum under the additional condition that the global cost function satisfies the Polyak-Łojasiewicz (PL) condition. The PL condition is weaker than strong convexity, the standard assumption for proving linear convergence of distributed optimization algorithms, and it does not require the global minimizer to be unique. Motivated by situations where gradients are unavailable, we then propose a distributed zeroth-order algorithm, derived from the considered first-order algorithm by replacing gradients with a deterministic gradient estimator, and show that it has the same convergence properties as the first-order algorithm under the same conditions. The theoretical results are illustrated by numerical simulations.
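The abstract does not specify the exact form of the deterministic gradient estimator used in the paper; a minimal sketch of one common deterministic (2n-point, central-difference) zeroth-order estimator is shown below to illustrate the idea of replacing gradients with function evaluations. The function name and smoothing parameter `delta` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def zeroth_order_gradient(f, x, delta=1e-5):
    """Deterministic 2n-point (central-difference) gradient estimator.

    Approximates grad f(x) using only function evaluations, the kind of
    oracle available in zeroth-order optimization; `delta` is the
    finite-difference step. This is an illustrative sketch, not the
    specific estimator analyzed in the paper.
    """
    n = x.size
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0  # i-th coordinate direction
        g[i] = (f(x + delta * e) - f(x - delta * e)) / (2.0 * delta)
    return g

# Example: for the smooth cost f(x) = ||x||^2 the true gradient is 2x,
# and the central difference recovers it (exactly, for a quadratic).
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0, 3.0])
g = zeroth_order_gradient(f, x)
```

In a distributed first-order primal-dual scheme, each agent's local gradient call would be replaced by such an estimator to obtain the zeroth-order variant, at the cost of 2n function evaluations per iteration.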