Regularized Diffusion Adaptation via Conjugate Smoothing

Publication Type:
Article
Authors:
Vlaski, Stefan; Vandenberghe, Lieven; Sayed, Ali H.
Affiliations:
Swiss Federal Institutes of Technology Domain; Ecole Polytechnique Federale de Lausanne; University of California System; University of California Los Angeles
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2021.3081073
Publication Year:
2022
Pages:
2343-2358
Keywords:
Smoothing methods; Aggregates; Eigenvalues and eigenfunctions; Cost function; Pareto optimization; Linear matrix inequalities; Electrical engineering; Diffusion strategy; distributed optimization; nonsmooth regularizer; proximal diffusion; proximal operator; regularized diffusion; smoothing
Abstract:
The purpose of this article is to develop and study a decentralized strategy for Pareto optimization of an aggregate cost consisting of regularized risks. Each risk is modeled as the expectation of some loss function with unknown probability distribution, while the regularizers are assumed deterministic, but are not required to be differentiable or even continuous. The individual, regularized, cost functions are distributed across a strongly connected network of agents, and the Pareto optimal solution is sought by appealing to a multiagent diffusion strategy. To this end, the regularizers are smoothed by means of infimal convolution, and it is shown that the Pareto solution of the approximate smooth problem can be made arbitrarily close to the solution of the original nonsmooth problem. Performance bounds are established under conditions that are weaker than assumed before in the literature and, hence, applicable to a broader class of adaptation and learning problems.
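The smoothing step described in the abstract can be illustrated on a concrete case. Taking infimal convolution of a nonsmooth regularizer with a scaled quadratic yields its Moreau envelope, which is differentiable with a Lipschitz gradient and approximates the original function to within a gap controlled by the smoothing parameter. The sketch below (not taken from the article; the function names and the choice of the ℓ1 regularizer are illustrative) shows this for R(x) = |x|, whose envelope is the Huber function:

```python
import numpy as np

def prox_l1(v, mu):
    # Proximal operator of mu*|.|: soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def moreau_env_l1(x, mu):
    # Moreau envelope of |.|, i.e., the infimal convolution
    #   e_mu(x) = min_z { |z| + (1/(2*mu)) * (x - z)^2 },
    # whose minimizer is z = prox_l1(x, mu).
    z = prox_l1(x, mu)
    return np.abs(z) + (x - z) ** 2 / (2.0 * mu)

def grad_moreau_env_l1(x, mu):
    # Gradient of the envelope: (x - prox(x)) / mu.
    # It is Lipschitz with constant 1/mu, so standard
    # gradient-based (e.g., diffusion) updates apply.
    return (x - prox_l1(x, mu)) / mu

mu = 0.5
x = np.linspace(-2.0, 2.0, 9)
env = moreau_env_l1(x, mu)

# The envelope lower-bounds |x| and, since |.| is 1-Lipschitz,
# the approximation gap never exceeds mu/2 -- the mechanism that
# lets the smoothed Pareto solution be made arbitrarily close
# to the nonsmooth one by shrinking mu.
print(np.max(np.abs(x) - env))  # <= mu/2 = 0.25
```

Here the envelope's gradient equals `clip(x/mu, -1, 1)`, a smooth surrogate for the subdifferential of |x|; shrinking `mu` tightens the approximation at the cost of a larger gradient Lipschitz constant 1/mu.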