FAILURES OF FAIRNESS IN AUTOMATION OF HUMAN-ML AUGMENTATION
Publication Type:
Article
Authors:
Teodorescu, Mike H. M.; Morse, Lily; Awwad, Yazeed; Kane, Gerald C.
Affiliations:
Boston College; West Virginia University; Massachusetts Institute of Technology (MIT)
Journal:
MIS QUARTERLY
ISSN:
0276-7783
DOI:
10.25300/MISQ/2021/16535
Publication Year:
2021
Pages:
1483-1500
Keywords:
explanation facilities
recommendation agents
technology
systems
impact
complexity
framework
security
interdependence
uncertainty
Abstract:
Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet run the risk of introducing systematic unfairness into organizational processes. Automated approaches to achieving fairness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human-ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of, and research into, human-ML augmentation. We introduce a typology of augmentation for fairness consisting of four quadrants: reactive oversight, proactive oversight, informed reliance, and supervised reliance. For each quadrant, we identify significant intersections with previous IS research and distinct managerial approaches to fairness. Several potential research questions emerge from fundamental differences between ML tools trained on data and traditional IS built with code. IS researchers may discover that these differences undermine some of the fundamental assumptions upon which classic IS theories and concepts rest. In light of these differences, ML may require a massive rethinking of significant portions of the corpus of IS research, representing an exciting frontier for research into human-ML augmentation in the years ahead that IS researchers should embrace.
Source URL: