Human-Algorithmic Bias: Source, Evolution, and Impact

Publication Type:
Article; Early Access
Authors:
Hu, Xiyang; Huang, Yan; Li, Beibei; Lu, Tian
Affiliations:
Arizona State University; Arizona State University-Tempe; Carnegie Mellon University; Carnegie Mellon University
Journal:
MANAGEMENT SCIENCE
ISSN/ISBN:
0025-1909
DOI:
10.1287/mnsc.2022.03862
Publication Date:
2025
Keywords:
algorithmic bias; human bias; machine learning; structural modeling; microlending
Abstract:
Prior work on human-algorithmic bias has had difficulty empirically identifying the underlying mechanisms of bias, because in a typical one-time decision-making scenario, different mechanisms generate the same patterns of observable decisions. In this study, leveraging a unique repeat decision-making setting in a high-stakes microlending context, we aim to uncover the underlying source, evolution dynamics, and associated impacts of bias. We first develop a structural econometric model of the decision dynamics to understand the source and evolution of bias in human evaluators in microloan granting. We find that both preference-based and belief-based biases exist in human decisions and are in favor of female applicants. Our counterfactual simulations show that eliminating either of the two biases improves fairness in financial resource allocation as well as platform profits. The profit improvement mainly stems from the increased approval probability for male borrowers, especially those who would eventually pay back their loans. Furthermore, to examine how human biases evolve when inherited by machine learning (ML) algorithms, we train state-of-the-art ML algorithms for default risk prediction on both real-world data sets with human biases encoded within and counterfactual data sets with human biases partially or fully removed. We find that even fairness-unaware ML algorithms can reduce bias in human decisions. Interestingly, although removing both types of human bias from the training data can further improve ML fairness, the fairness-enhancing effects vary significantly between new and repeat applicants. Based on our findings, we discuss how to reduce decision bias most effectively in a human-ML pipeline.
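The abstract's ML comparison step can be illustrated with a minimal sketch (not the authors' code): train the same fairness-unaware default-risk classifier once on labels reflecting observed human decisions and once on a counterfactual, bias-removed label set, then compare a simple fairness metric such as the gender gap in predicted approval rates. The file name and column names ("gender", "label_observed", "label_debiased") are hypothetical, and the features are assumed to be numeric.

```python
# Sketch: compare fairness of a default-risk model trained on biased vs.
# debiased labels. All data/column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split


def approval_rate_gap(model, X, gender):
    """Predicted approval-rate difference between male and female applicants."""
    approve = model.predict(X) == 0  # predicted non-default -> approve
    return approve[(gender == "male").values].mean() - approve[(gender == "female").values].mean()


def fit_and_audit(features, labels, gender):
    """Train one classifier on the given label set and audit it on held-out data."""
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        features, labels, gender, test_size=0.3, random_state=0
    )
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    return approval_rate_gap(model, X_te, g_te)


# Loan applications with borrower features, gender, and two label columns:
# observed (possibly biased) human decisions and a counterfactual, debiased set.
df = pd.read_csv("microloan_applications.csv")  # hypothetical file
features = df.drop(columns=["gender", "label_observed", "label_debiased"])

gap_biased = fit_and_audit(features, df["label_observed"], df["gender"])
gap_debiased = fit_and_audit(features, df["label_debiased"], df["gender"])
print(f"approval-rate gap (trained on observed labels):  {gap_biased:+.3f}")
print(f"approval-rate gap (trained on debiased labels):  {gap_debiased:+.3f}")
```

A gap closer to zero under the debiased labels would correspond, in this simplified setup, to the paper's finding that removing human bias from the training data improves ML fairness; the paper's own analysis uses a structural model and richer counterfactuals rather than this two-column comparison.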