Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion
Publication Type:
Article
Authors:
Swenson, Brian; Murray, Ryan; Poor, H. Vincent; Kar, Soummya
Affiliations:
Pennsylvania Commonwealth System of Higher Education (PCSHE); Pennsylvania State University; Pennsylvania State University - University Park; North Carolina State University; Princeton University; Carnegie Mellon University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2021.3111853
Publication Year:
2022
Pages:
3949-3964
Keywords:
optimization
convergence
linear programming
manifolds
standards
heuristic algorithms
trajectory
distributed optimization
gradient descent
gradient flow
nonconvex optimization
nonsmooth optimization
saddle point
stable manifold
Abstract:
This article considers distributed gradient flow (DGF) for multiagent nonconvex optimization. DGF is a continuous-time approximation of distributed gradient descent that is often easier to analyze than its discrete-time counterpart. The article makes two main contributions. First, it considers optimization of nonsmooth, nonconvex objective functions and shows that DGF converges to critical points in this setting. Second, it considers the problem of avoiding saddle points and shows that if agents' objective functions are smooth and nonconvex, then DGF can converge to a saddle point only from a zero-measure set of initial conditions. To establish this result, the article proves a stable manifold theorem for DGF, a fundamental contribution of independent interest. Analogous results for discrete-time algorithms are derived in a companion article.
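For intuition, the sketch below simulates a generic distributed gradient flow of the form dx_i/dt = -grad f_i(x_i) + beta * sum_j a_ij (x_j - x_i) by forward-Euler integration: each agent descends its own local objective while a graph-Laplacian consensus term pulls the agents' states together. This is a standard DGF form rather than necessarily the paper's exact formulation; the quadratic objectives, complete communication graph, and all constants are illustrative assumptions.

    import numpy as np

    def simulate_dgf(grads, adjacency, x0, beta=5.0, dt=1e-3, steps=20000):
        # Forward-Euler integration of
        #   dx_i/dt = -grad f_i(x_i) + beta * sum_j a_ij (x_j - x_i)
        x = np.array(x0, dtype=float)  # shape (n, d): one row per agent
        laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
        for _ in range(steps):
            # Local gradient descent for each agent ...
            descent = np.stack([-g(x[i]) for i, g in enumerate(grads)])
            # ... plus consensus coupling through the graph Laplacian.
            x = x + dt * (descent - beta * (laplacian @ x))
        return x

    # Hypothetical example: three agents minimize f_i(x) = 0.5 * ||x - c_i||^2
    # over a complete graph; the minimizer of the sum is mean(c_i) = 3.0.
    centers = [np.array([0.0]), np.array([3.0]), np.array([6.0])]
    grads = [lambda x, c=c: x - c for c in centers]  # grad f_i(x) = x - c_i
    adjacency = np.ones((3, 3)) - np.eye(3)
    x_final = simulate_dgf(grads, adjacency, np.random.randn(3, 1))
    print(x_final)  # each agent's state lands near 3.0; agreement tightens as beta grows

With smooth quadratic objectives like these, the trajectories settle at the consensus minimizer; the article's contributions concern the harder regimes this toy setup omits, namely nonsmooth objectives and saddle points of nonconvex ones.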