Wasserstein Robust Classification with Fairness Constraints
Publication type:
Article
Authors:
Wang, Yijie; Nguyen, Viet Anh; Hanasusanto, Grani A.
Affiliations:
Tongji University; Chinese University of Hong Kong; University of Illinois System; University of Illinois Urbana-Champaign
Journal:
M&SOM: Manufacturing & Service Operations Management
ISSN/ISBN:
1523-4614
DOI:
10.1287/msom.2022.0230
Publication date:
2024
Keywords:
programming
stochastic methods
Abstract:
Problem definition: Data analytics models and machine learning algorithms are increasingly deployed to support consequential decision-making processes, from deciding which applicants receive job offers and loans to university enrollments and medical interventions. However, recent studies show these models may unintentionally amplify human bias and yield significantly unfavorable decisions for specific groups. Methodology/results: We propose a distributionally robust classification model with a fairness constraint that encourages the classifier to be fair under the equality of opportunity criterion. We use a type-∞ Wasserstein ambiguity set centered at the empirical distribution to represent distributional uncertainty and derive a conservative reformulation for the worst-case equal opportunity unfairness measure. We show that the model is equivalent to a mixed binary conic optimization problem, which can be solved by standard off-the-shelf solvers. To improve scalability, we propose a convex, hinge-loss-based model for large problem instances whose reformulation does not incur binary variables. Moreover, we also consider the distributionally robust learning problem with a generic ground transportation cost to hedge against label and sensitive attribute uncertainties. We numerically examine the performance of our proposed models on five real-world data sets related to individual analysis. Compared with state-of-the-art methods, our proposed approaches significantly improve fairness with negligible loss of predictive accuracy on the testing data set. Managerial implications: Our paper raises awareness that bias may arise when predictive models are used in service and operations. Such bias generally stems from human bias, for example, imbalanced data collection or low sample sizes, and is further amplified by algorithms. Incorporating fairness constraints within a distributionally robust optimization (DRO) scheme is a powerful way to alleviate algorithmic biases.
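As a rough illustration of the approach described in the abstract (not the paper's exact reformulation), the fairness-constrained distributionally robust classification problem can be sketched as follows; the notation here is assumed for exposition only: β denotes the classifier parameters, ℓ a classification loss, 𝒰 an equal-opportunity unfairness measure, η a fairness tolerance, and ε the Wasserstein radius.

\[
\min_{\beta}\;\;\sup_{\mathbb{Q}\,\in\,\mathcal{B}_{\varepsilon}^{\infty}(\widehat{\mathbb{P}}_N)}\;\mathbb{E}_{\mathbb{Q}}\big[\ell(\beta;\,X,\,Y)\big]
\quad\text{subject to}\quad
\sup_{\mathbb{Q}\,\in\,\mathcal{B}_{\varepsilon}^{\infty}(\widehat{\mathbb{P}}_N)}\;\mathcal{U}(\beta;\,\mathbb{Q})\;\le\;\eta ,
\]

where \(\mathcal{B}_{\varepsilon}^{\infty}(\widehat{\mathbb{P}}_N)\) is the type-∞ Wasserstein ball of radius \(\varepsilon\) centered at the empirical distribution \(\widehat{\mathbb{P}}_N\), and \(\mathcal{U}\) captures equal-opportunity unfairness, for example the gap in true-positive rates between the sensitive groups. The paper's contribution, per the abstract, is a conservative tractable reformulation of the worst-case unfairness constraint (mixed binary conic in the exact case, convex in the hinge-loss variant).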
Source URL: