The Anchoring Effect, Algorithmic Fairness, and the Limits of Information Transparency for Emotion Artificial Intelligence
Output type:
Article
Authors:
Rhue, Lauren
Affiliations:
University System of Maryland; University of Maryland College Park
Journal:
INFORMATION SYSTEMS RESEARCH
ISSN/ISBN:
1047-7047
DOI:
10.1287/isre.2019.0493
Publication date:
2024
Pages:
1479-1496
Keywords:
FACIAL EXPRESSIONS
RECOGNITION
BIAS
face
Abstract:
Emotion artificial intelligence (AI), or emotion recognition AI, may systematically vary in its recognition of facial expressions and emotions across demographic groups, creating inconsistencies and disparities in its scoring. This paper explores the extent to which individuals can compensate for these disparities and inconsistencies in emotion AI, considering two opposing factors: although humans evolved to recognize emotions, particularly happiness, they are also subject to cognitive biases, such as the anchoring effect. To help understand these dynamics, this study tasks three commercially available emotion AIs and a group of human labelers with identifying emotions from faces in two image data sets. The scores generated by the emotion AIs and the human labelers are examined for inference inconsistencies (i.e., misalignment between facial expression and emotion label). The human labelers are also provided with the emotion AI's scores and with measures of its scoring fairness (or lack thereof). We observe that even when human labelers operate in this context of information transparency, they may still rely on the emotion AI's scores, perpetuating its inconsistencies. Several findings emerge from this study. First, the anchoring effect appears to be moderated by the type of inference inconsistency and is weaker for easier emotion recognition tasks. Second, when human labelers are provided with information transparency regarding the emotion AI's fairness, the effect is not uniform across emotions. Also, there is no evidence that information transparency leads to the selective anchoring necessary to offset emotion AI disparities; in fact, some evidence suggests that information transparency increases human inference inconsistencies. Lastly, the different emotion AI models are highly inconsistent in their scores, raising doubts about emotion AI more generally.
Collectively, these findings provide evidence of the potential limitations of addressing algorithmic bias through individual decisions, even when those individuals are supported with information transparency.
Source URL: