Locally Differentially Private Distributed Online Learning With Guaranteed Optimality
Document Type:
Article
Authors:
Chen, Ziqin; Wang, Yongqiang
Affiliation:
Clemson University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3482977
Publication Year:
2025
Pages:
2521-2536
Keywords:
Distributed databases
Differential privacy
Accuracy
Privacy
Optimization
Noise
Vectors
Protection
Heuristic algorithms
Machine learning
Distributed online optimization and learning
Instantaneous regret
Local differential privacy (LDP)
Abstract:
Distributed online learning is gaining traction due to its unique ability to process large-scale datasets and streaming data. To address growing public awareness of and concern about privacy protection, many algorithms have been proposed to enable differential privacy in distributed online optimization and learning. However, these algorithms often face the dilemma of trading learning accuracy for privacy. By exploiting the unique characteristics of online learning, this article proposes an approach that tackles the dilemma and ensures both differential privacy and learning accuracy in distributed online learning. More specifically, while ensuring a diminishing expected instantaneous regret, the approach simultaneously ensures a finite cumulative privacy budget, even on an infinite time horizon. To cater to the fully distributed setting, we adopt the local differential-privacy framework, which avoids reliance on the trusted data curator required in the classic centralized (global) differential-privacy framework. To the best of our knowledge, this is the first algorithm that ensures both rigorous local differential privacy and learning accuracy. The effectiveness of the proposed algorithm is evaluated on machine learning tasks, including logistic regression on the mushrooms dataset and convolutional neural network-based image classification on the MNIST and CIFAR-10 datasets.
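To make the abstract's central claim concrete, the sketch below simulates the privacy-budget arithmetic in Python. It is our own toy illustration under stated assumptions, not the paper's algorithm: two learners exchange Laplace-perturbed states (so each learner's raw state is never revealed, in the spirit of local differential privacy), the per-round message sensitivity is assumed to shrink with the stepsize `lam` while the noise scale `sig` grows, so each round's budget `eps_t = C*lam/sig ~ t**-1.2` is summable and the cumulative budget stays finite, and a decaying interaction weight `gam` damps the growing noise so learning still converges. The schedules `lam`, `sig`, `gam`, the gradient bound `C`, and the quadratic loss are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, T, C = 5, 100_000, 1.0        # dimension, rounds, assumed gradient bound
target = np.ones(DIM)              # common optimum of the toy quadratic losses

theta = [np.zeros(DIM), np.zeros(DIM)]   # two learners' local models
budget = 0.0                             # cumulative privacy budget sum_t eps_t

for t in range(1, T + 1):
    lam = 1.0 / t            # stepsize; message sensitivity assumed ~ C * lam
    sig = t**0.2             # Laplace noise scale on shared states (grows)
    gam = t**-0.8            # weight on noisy neighbor information (decays)
    budget += C * lam / sig  # eps_t of the Laplace mechanism; sums like t**-1.2

    # Each learner only ever reveals a Laplace-perturbed state.
    noisy = [th + rng.laplace(scale=sig, size=DIM) for th in theta]
    for i in (0, 1):
        grad = theta[i] - target                # gradient of 0.5*||theta - target||^2
        pull = gam * (noisy[1 - i] - theta[i])  # attenuated noisy consensus term
        theta[i] = theta[i] + pull - lam * grad

err = max(np.linalg.norm(th - target) for th in theta)
print(f"cumulative privacy budget: {budget:.2f}  (finite: eps_t ~ t^-1.2)")
print(f"worst learner error: {err:.3f}  (noise impact gam*sig ~ t^-0.6 -> 0)")
```

The design choice this toy highlights is the decoupling of the per-round budget from the injected noise: because the sensitivity of the shared message decays with the stepsize, the budget can be summable even while the noise scale grows, sidestepping the usual accuracy-versus-privacy trade-off in a way that is only possible because the learning dynamics themselves are diminishing over time.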