Distributed Derivative-Free Learning Method for Stochastic Optimization Over a Network With Sparse Activity
Publication Type:
Article
Authors:
Li, Wenjie; Assaad, Mohamad; Zheng, Shiqi
Affiliations:
Universite Paris Saclay; Huawei Technologies; Centre National de la Recherche Scientifique (CNRS); China University of Geosciences
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2021.3077516
Publication Year:
2022
Pages:
2221-2236
Keywords:
optimization
convergence
stochastic processes
perturbation methods
linear programming
convex functions
tools
convergence analysis
derivative-free learning
distributed algorithm
sparse network
stochastic optimization
Abstract:
This article addresses a distributed optimization problem in a communication network whose nodes are only sporadically active. Each active node applies a learning method to control its action so as to maximize the global utility function, defined as the sum of the local utility functions of the active nodes. We deal with a stochastic optimization problem in which the utility functions are disturbed by a nonadditive stochastic process. We consider the more challenging situation where the learning method must rely only on a scalar approximation of the utility function, rather than on its closed-form expression, so that the typical gradient descent method cannot be applied. This setting is realistic when the network is affected by a stochastic, time-varying process and each node cannot have full knowledge of the network state. We propose a distributed optimization algorithm and prove its almost sure convergence to the optimum. The convergence rate is also derived under the additional assumption that the objective function is strongly concave. Numerical results are presented to support our claims.
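The abstract's key idea, learning from only a scalar (possibly noisy) observation of the utility rather than its gradient, can be illustrated with a classic random-perturbation gradient estimate. The sketch below is not the paper's algorithm: it is a minimal single-node, zeroth-order ascent loop on an assumed strongly concave quadratic utility with multiplicative (nonadditive) noise; all names (`utility`, `delta`, `step`) and the specific two-point estimator are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
x_star = np.array([1.0, -2.0, 0.5])  # unknown maximizer (for illustration only)

def utility(x):
    # Strongly concave utility observed with multiplicative (nonadditive) noise;
    # the learner sees only this scalar value, never a gradient.
    noise = 1.0 + 0.01 * rng.standard_normal()
    return -np.sum((x - x_star) ** 2) * noise

x = np.zeros(d)
for k in range(1, 5001):
    delta = 0.1 / k ** 0.25          # shrinking perturbation radius
    step = 0.5 / k                   # diminishing step size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)           # random direction on the unit sphere
    # Two-point derivative-free gradient estimate from scalar evaluations:
    g_hat = (d / (2 * delta)) * (utility(x + delta * u) - utility(x - delta * u)) * u
    x = x + step * g_hat             # stochastic ascent step

print(np.round(x, 2))                # should land near x_star
```

With diminishing step sizes and perturbation radius, the iterate drifts toward the maximizer despite the learner never computing a derivative; this is the kind of scalar-feedback setting the paper's distributed, sparsely active variant analyzes.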