Differential Privacy via Distributionally Robust Optimization

Publication Type:
Article; Early Access
Authors:
Selvi, Aras; Liu, Huikang; Wiesemann, Wolfram
Affiliations:
Imperial College London; Shanghai Jiao Tong University
Journal:
OPERATIONS RESEARCH
ISSN/ISBN:
0030-364X
DOI:
10.1287/opre.2023.0218
Publication Date:
2025
Keywords:
uncertainty noise
Abstract:
In recent years, differential privacy has emerged as the de facto standard for sharing statistics of data sets while limiting the disclosure of private information about the involved individuals. This is achieved by randomly perturbing the statistics to be published, which, in turn, leads to a privacy-accuracy trade-off: larger perturbations provide stronger privacy guarantees, but they result in less accurate statistics that offer lower utility to the recipients. Of particular interest are, therefore, optimal mechanisms that provide the highest accuracy for a preselected level of privacy. To date, work in this area has focused on specifying families of perturbations a priori and subsequently proving their asymptotic and/or best-in-class optimality. In this paper, we develop a class of mechanisms that enjoy nonasymptotic and unconditional optimality guarantees. To this end, we formulate the mechanism design problem as an infinite-dimensional distributionally robust optimization problem. We show that the problem affords a strong dual, and we exploit this duality to develop converging hierarchies of finite-dimensional upper- and lower-bounding problems. Our upper (primal) bounds correspond to implementable perturbations whose suboptimality can be bounded by our lower (dual) bounds. Both bounding problems can be solved within seconds via cutting-plane techniques that exploit the inherent problem structure. Our numerical experiments demonstrate that our perturbations can outperform the previously best results from the literature on artificial as well as standard benchmark problems.
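The abstract's privacy-accuracy trade-off can be illustrated with the classical Laplace mechanism, the textbook baseline that the paper's DRO-based mechanisms aim to improve upon. The sketch below is not the paper's method; the function names, the query value, and the sensitivity of 1 are illustrative assumptions. It shows empirically that a smaller privacy budget epsilon (stronger privacy) forces larger noise and hence a larger mean absolute error.

```python
import random

def sample_laplace(scale, rnd):
    """Draw Laplace(0, scale) noise as the difference of two exponentials."""
    return rnd.expovariate(1.0 / scale) - rnd.expovariate(1.0 / scale)

def laplace_mechanism(true_value, sensitivity, epsilon, rnd):
    """Classical epsilon-differentially private release of a numeric query:
    add Laplace noise with scale = sensitivity / epsilon."""
    return true_value + sample_laplace(sensitivity / epsilon, rnd)

rnd = random.Random(0)
true_count = 120.0  # hypothetical counting query; sensitivity 1 assumed
maes = {}
for eps in (0.1, 1.0, 10.0):
    errors = [abs(laplace_mechanism(true_count, 1.0, eps, rnd) - true_count)
              for _ in range(20_000)]
    maes[eps] = sum(errors) / len(errors)
    # Expected MAE of Laplace(scale) noise equals its scale, i.e. 1/eps here.
    print(f"epsilon={eps:>4}: mean absolute error ~ {maes[eps]:.2f}")
```

Stronger privacy (eps = 0.1) yields roughly 100x the error of weak privacy (eps = 10); optimal mechanism design, as studied in the paper, seeks the best achievable point on this trade-off curve for a fixed privacy level.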
Source URL: