Human-Algorithm Collaboration with Private Information: Naïve Advice-Weighting Behavior and Mitigation
Publication type:
Article; Early Access
Authors:
Balakrishnan, Maya; Ferreira, Kris Johnson; Tong, Jordan
Affiliations:
University of Texas System; University of Texas Dallas; Harvard University; University of Wisconsin System; University of Wisconsin Madison
Journal:
MANAGEMENT SCIENCE
ISSN/ISBN:
0025-1909
DOI:
10.1287/mnsc.2022.03850
Publication year:
2025
Keywords:
Human-Algorithm Interaction
forecasting
behavioral operations
algorithm transparency
advice taking
Abstract:
Even if algorithms make better predictions than humans on average, humans may sometimes have private information that an algorithm does not have access to and that can improve performance. How can we help humans effectively use and adjust recommendations made by algorithms in such situations? When deciding whether and how to override an algorithm's recommendations, we hypothesize that people are biased toward naïve advice-weighting (NAW) behavior: they take a weighted average of their own prediction and the algorithm's prediction, with a constant weight across prediction instances regardless of whether they have valuable private information. This leads humans to overadhere to the algorithm's predictions when their private information is valuable and to underadhere when it is not. In an online experiment where participants made demand predictions for 20 products while having access to an algorithm's predictions, we confirm this bias toward NAW and find that it leads to a 20%-61% increase in prediction error. In a second experiment, we find that feature transparency, even when the underlying algorithm is a black box, helps users more effectively discriminate how to deviate from algorithms, resulting in a 25% reduction in prediction error. We make further improvements in a third experiment via an intervention designed to move users away from advice weighting and toward using only their private information to inform deviations, leading to a 34% reduction in prediction error.
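The NAW bias described in the abstract can be illustrated with a minimal sketch. The numbers, weight, and scenarios below are hypothetical and not taken from the paper; they only show why a constant blending weight hurts: it dilutes valuable private information in one instance and props up an uninformed guess in another.

```python
def naw_prediction(own, algo, w=0.5):
    """Naive advice-weighting: a constant-weight average of the human's own
    prediction and the algorithm's prediction (w is the weight on the human)."""
    return w * own + (1 - w) * algo

# Two hypothetical demand-prediction instances:
# 1) the human holds valuable private info (own = true demand = 120; algo = 80)
# 2) the human has no private info (own guess = 60; algo is accurate at 100)
true_demand = [120, 100]
own_preds = [120, 60]
algo_preds = [80, 100]

for t, o, a in zip(true_demand, own_preds, algo_preds):
    blended = naw_prediction(o, a)
    print(f"blended={blended}, error={abs(blended - t)}")
# In instance 1 NAW overadheres to the algorithm (discarding good private info);
# in instance 2 it underadheres (injecting an uninformed guess). Either way the
# constant weight produces avoidable error relative to instance-specific weighting.
```

Under these assumed numbers, both instances end up with a blended error of 20 units, even though an instance-specific rule (trust the private info in instance 1, follow the algorithm in instance 2) would have zero error in both.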