Efficient Distributed Learning over Decentralized Networks with Convoluted Support Vector Machine
Document type:
Article; Early Access
Authors:
Chen, Canyi; Qiao, Nan; Zhu, Liping
Affiliations:
University of Michigan System; University of Michigan; Renmin University of China
Journal:
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
ISSN:
0162-1459
DOI:
10.1080/01621459.2025.2550671
Publication date:
2025
Keywords:
nonconcave penalized likelihood
variable selection
quantile regression
regularization
optimization
convergence
algorithm
Abstract:
This article concerns efficiently classifying high-dimensional data over decentralized networks. Penalized support vector machines (SVMs) are widely used for high-dimensional classification tasks. However, the double nonsmoothness of the objective function poses significant challenges in developing efficient decentralized learning methods. Existing approaches frequently suffer from slow, sublinear convergence rates. To address this issue, we consider a convolution-based smoothing technique for the nonsmooth hinge loss function. This results in a loss function that is both convex and smooth. We then develop an efficient generalized alternating direction method of multipliers (ADMM) algorithm to solve penalized SVMs in decentralized networks. Our theoretical contributions are twofold. First, we demonstrate that our generalized ADMM algorithm achieves linear convergence with a straightforward implementation. Second, we show that, after a sufficient number of ADMM iterations, the final sparse estimate attains the optimal statistical convergence rate and accurately recovers the true support of the underlying parameters. Extensive numerical experiments on both synthetic and real-world datasets validate our theoretical findings. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
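To illustrate the convolution-based smoothing idea described in the abstract, here is a minimal sketch. It assumes a Gaussian kernel, which admits a closed form: smoothing the hinge loss max(0, 1 - u) with bandwidth h gives (1 - u)Φ((1 - u)/h) + h·φ((1 - u)/h), where Φ and φ are the standard normal CDF and density. The kernel choice, the bandwidth value, and the function names are illustrative assumptions, not the paper's exact specification.

```python
import math


def smoothed_hinge(u, h=0.5):
    """Gaussian-kernel convolution smoothing of the hinge loss max(0, 1 - u).

    Closed form: E[max(0, 1 - u + h*Z)] for Z ~ N(0, 1), which equals
    (1 - u) * Phi(z) + h * phi(z) with z = (1 - u) / h.
    Convex and infinitely differentiable in u; recovers the hinge as h -> 0.
    """
    z = (1.0 - u) / h
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # density
    return (1.0 - u) * Phi + h * phi


def smoothed_hinge_grad(u, h=0.5):
    """Derivative in u: -Phi((1 - u) / h), a smooth surrogate for the
    hinge subgradient -1{u < 1}."""
    z = (1.0 - u) / h
    return -0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

For small h the smoothed loss tracks the hinge closely (e.g., at u = -1 it is close to 2, and at u = 2 close to 0), while its gradient is Lipschitz, which is what enables the linearly convergent generalized ADMM updates discussed in the abstract.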