Differentially Private ADMM for Regularized Consensus Optimization
Publication type:
Article
Authors:
Cao, Xuanyu; Zhang, Junshan; Poor, H. Vincent; Tian, Zhi
Affiliations:
University of Illinois System; University of Illinois Urbana-Champaign; Arizona State University; Arizona State University-Tempe; Princeton University; George Mason University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2020.3022856
Publication year:
2021
Pages:
3718-3725
Keywords:
Privacy
Cost function
Convergence
Convex functions
Machine learning
ADMM
Differential privacy
Distributed optimization
Abstract:
Due to its broad applicability in machine learning, resource allocation, and control, the alternating direction method of multipliers (ADMM) has been extensively studied in the literature. The message exchange of the ADMM in multiagent optimization may reveal sensitive information of agents, which can be overheard by malicious attackers. This drawback hinders the application of the ADMM to privacy-aware multiagent systems. In this article, we consider consensus optimization with regularization, in which the cost function of each agent contains private sensitive information, e.g., private data in machine learning and private usage patterns in resource allocation. We develop a variant of the ADMM that preserves agents' differential privacy by injecting noise into the public signals broadcast to the agents. We derive conditions on the magnitudes of the added noise under which the designated level of differential privacy is achieved. Furthermore, the convergence properties of the proposed differentially private ADMM are analyzed under the assumption that the cost functions are strongly convex with Lipschitz continuous gradients and that the regularizer has smooth gradients or bounded subgradients. We find that, to attain the best convergence performance at a given privacy level, the magnitude of the injected noise should decrease as the algorithm progresses. Additionally, the number of iterations should be chosen to balance the tradeoff between the convergence and the privacy leakage of the ADMM, which is explicitly characterized by the derived upper bounds on convergence performance. Finally, numerical results are presented to corroborate the efficacy of the proposed algorithm. In particular, we apply the proposed algorithm to multiagent linear-quadratic control with private information to showcase its merit in control applications.
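To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical Python sketch of a consensus ADMM in which Gaussian noise is added to the public signal before it is broadcast to the agents, with the noise magnitude decaying over iterations as the abstract suggests. The quadratic local costs, l1 regularizer, coordinator-based aggregation, and all parameter values (rho, lam, sigma0, decay) are illustrative assumptions, not the paper's construction; calibrating the noise schedule to a target privacy budget requires the sensitivity analysis in the paper and is omitted here.

```python
# Hypothetical sketch: noise-injected consensus ADMM (not the paper's exact algorithm).
# Assumed problem:  minimize  sum_i 0.5*||A_i x - b_i||^2  +  lam*||x||_1
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, rho, lam = 5, 10, 1.0, 0.1
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]   # local data (strongly convex losses)
b = [rng.standard_normal(20) for _ in range(n_agents)]

def soft_threshold(v, tau):
    # Proximal operator of the l1 regularizer (bounded subgradients).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros((n_agents, dim))   # local primal variables
u = np.zeros((n_agents, dim))   # scaled dual variables
z_pub = np.zeros(dim)           # noisy public signal seen by the agents
sigma0, decay = 1.0, 0.9        # assumed decaying noise schedule (not calibrated to a DP budget)

for k in range(100):
    # Local primal updates use only the noisy public signal z_pub.
    for i in range(n_agents):
        rhs = A[i].T @ b[i] + rho * (z_pub - u[i])
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(dim), rhs)
    # Aggregator forms the regularized consensus variable via soft-thresholding ...
    z = soft_threshold((x + u).mean(axis=0), lam / (n_agents * rho))
    # ... and perturbs it before broadcasting; the injected noise shrinks geometrically,
    # consistent with the abstract's finding that the noise should decrease over iterations.
    z_pub = z + sigma0 * decay**k * rng.standard_normal(dim)
    # Dual updates, again based only on the broadcast (noisy) signal.
    for i in range(n_agents):
        u[i] += x[i] - z_pub

print("consensus estimate:", z)
```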