Mirror, Mirror on the Wall: Algorithmic Assessments, Transparency, and Self-Fulfilling Prophecies
Publication type:
Article
Authors:
Bauer, Kevin; Gill, Andrej
Affiliations:
University of Mannheim; Johannes Gutenberg University of Mainz
Journal:
INFORMATION SYSTEMS RESEARCH
ISSN/ISBN:
1047-7047
DOI:
10.1287/isre.2023.1217
Publication date:
2024
Keywords:
moral wiggle room
intention-based reciprocity
social norms
illusory preference
task complexity
e-commerce
trust
online
recommendations
aversion
Abstract:
Predictive algorithmic scores can significantly impact the lives of assessed individuals by shaping the decisions of organizations and institutions that affect them, for example, influencing the hiring prospects of job applicants or the release of defendants on bail. To better protect people and provide them with the opportunity to appeal their algorithmic assessments, data privacy advocates and regulators increasingly push for disclosing the scores and their use in decision-making processes to scored individuals. Although inherently important, the response of scored individuals to such algorithmic transparency is understudied and therefore demands further research. Inspired by psychological and economic theories of information processing, we aim to fill this gap. We conducted a comprehensive experimental study with five treatment conditions to explore how and why disclosing the use of algorithmic scoring processes to (involuntarily) scored individuals affects their behaviors. Our results provide strong evidence that the disclosure of fundamentally erroneous algorithmic scores evokes self-fulfilling prophecies that endogenously steer the behavior of scored individuals toward their assessment, enabling algorithms to help produce the world they predict. The observed self-fulfilling prophecies are consistent with an anchoring effect and the exploitation of available moral wiggle room. Because scored individuals interpret others' motives for overriding human expert scores and algorithmic scores differently, some self-fulfilling prophecies occur only when algorithmic scores are disclosed. Our results emphasize that isolated transparency measures can have considerable side effects, with noticeable implications for the development of automation bias, the occurrence of feedback loops, and the design of transparency regulations.
Source URL: