Are Users Threatened by Credibility Assessment Systems?
Publication Type:
Article
Authors:
Elkins, Aaron C.; Dunbar, Norah E.; Adame, Bradley; Nunamaker, Jay F., Jr.
Affiliations:
Imperial College London; University of Oklahoma System; University of Oklahoma - Norman; University of Arizona
Journal:
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS
ISSN/ISBN:
0742-1222
DOI:
10.2753/MIS0742-1222290409
Publication Date:
2013
Pages:
249-261
Keywords:
self-affirmation
perceived ease
technology acceptance
Abstract:
Despite the improving accuracy of agent-based expert systems, the human experts these systems aid have not become more accurate. Self-affirmation theory suggests that expert users may experience threat, causing them to act defensively and ignore the system's conflicting recommendations. Previous research has demonstrated that affirming an individual in an unrelated area reduces defensiveness and increases objectivity toward conflicting information. Using an affirmation manipulation prior to a credibility assessment task, this study investigated whether experts are threatened by counterattitudinal expert system recommendations. In our study, 178 credibility assessment experts from the American Polygraph Association (n = 134) and the European Union's border security agency Frontex (n = 44) interacted with a deception detection expert system to make a deception judgment that the system immediately contradicted. Reducing threat prior to the judgment did not improve accuracy, but it did improve objectivity toward the system. This study demonstrates that human experts are threatened by advanced expert systems that contradict their expertise. As more systems integrate artificial intelligence and inadvertently assail the expertise and abilities of their users, threat and self-evaluative concerns will become an impediment to technology acceptance.