Local Differential Privacy for Decentralized Online Stochastic Optimization With Guaranteed Optimality and Convergence Speed

Publication Type:
Article
Authors:
Chen, Ziqin; Wang, Yongqiang
Affiliation:
Clemson University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3519938
Publication Date:
2025
Pages:
4238-4253
Keywords:
Optimization; privacy; convergence; noise; protection; differential privacy; accuracy; couplings; vectors; stochastic processes; decentralized stochastic optimization; local differential privacy (LDP); online learning
Abstract:
The increasing use of streaming data has raised significant privacy concerns in decentralized optimization and learning applications. To address these concerns, differential privacy (DP) has emerged as a standard approach for privacy protection in decentralized online optimization. Regrettably, existing DP solutions for decentralized online optimization face the dilemma of trading optimization accuracy for privacy. In this article, we propose a local-DP solution for decentralized online optimization/learning that ensures both optimization accuracy and rigorous DP, even over an infinite time horizon. In contrast to our prior results, which rely on a decaying coupling strength to gradually eliminate the influence of DP noise, the proposed approach allows the coupling strength to be time-invariant, which ensures a high convergence speed. Moreover, unlike prior results that rely on precise gradient information to ensure optimality, the proposed approach ensures convergence in mean square to the optimal solution even in the presence of stochastic gradients. We corroborate the effectiveness of our algorithm on multiple benchmark machine-learning applications, including logistic regression on the mushrooms dataset and convolutional neural network-based image classification on the MNIST and CIFAR-10 datasets.
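To make the abstract's ingredients concrete, the following is a minimal illustrative sketch (not the authors' algorithm) of a decentralized online update in which each agent only ever transmits a Laplace-perturbed copy of its state, combines neighbor messages with a time-invariant coupling strength, and descends along a stochastic gradient. The quadratic losses, the mixing matrix W, the coupling strength gamma, and the stepsize/noise schedules are all placeholder assumptions chosen for illustration; the paper's specific design of these quantities is what yields both rigorous LDP and mean-square convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim, T = 5, 3, 2000
gamma = 0.3  # time-invariant coupling strength (placeholder value)
# Doubly stochastic mixing weights for a fully connected graph (placeholder).
W = np.full((n_agents, n_agents), 1.0 / n_agents)

# Each agent i holds a streaming quadratic loss E[(a_i^T x - b_i)^2].
A = rng.normal(size=(n_agents, dim))
b = rng.normal(size=n_agents)

x = rng.normal(size=(n_agents, dim))  # local iterates, one row per agent

for t in range(1, T + 1):
    alpha = 1.0 / t            # decaying stepsize (placeholder schedule)
    lap_scale = 2.0 / t**0.6   # decaying Laplace-noise scale (placeholder schedule)

    # Local DP: each agent publishes only a Laplace-perturbed state.
    shared = x + rng.laplace(scale=lap_scale, size=x.shape)

    # Stochastic gradient: the streaming target b is observed with noise.
    b_noisy = b + 0.1 * rng.normal(size=n_agents)
    grads = 2.0 * (np.sum(A * x, axis=1) - b_noisy)[:, None] * A

    # Consensus on privatized neighbor states plus a local gradient step.
    x = x + gamma * (W @ shared - x) - alpha * grads

# Compare against the centralized least-squares optimum of the summed losses.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print("mean-square error to optimum:", np.mean((x - x_star) ** 2))
```

In this toy setup the constant gamma mirrors the abstract's time-invariant coupling, while the decaying noise and stepsize are only one plausible schedule; the paper's contribution lies in co-designing these so that privacy does not come at the cost of optimization accuracy.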