Unintended Consequences of Disclosing Recommendations by Artificial Intelligence versus Humans on True and Fake News Believability and Engagement

Publication Type:
Article
Authors:
Ma, Hanzhuo (Vivian); Huang, Wei (Wayne); Dennis, Alan R.
Affiliations:
Deakin University; Southern University of Science & Technology; Indiana University System; IU Kelley School of Business; Indiana University Bloomington
Journal:
JOURNAL OF MANAGEMENT INFORMATION SYSTEMS
ISSN:
0742-1222
DOI:
10.1080/07421222.2024.2376381
Publication Date:
2024
Pages:
616-644
Keywords:
normative social-influence; consumer trust; false news; media; agents; performance; acceptance; management; expertise; strangers
Abstract:
In an attempt to combat fake news, policymakers in many countries are considering mandating the disclosure of artificial intelligence (AI) recommendations of social media news articles. We used two randomized controlled experiments to investigate the effects of labeling social media news stories as recommended by AI. Our results show that an AI recommendation reduced belief in true news articles and had no material effect on belief in fake news. In contrast, a recommendation by an expert increased belief in true news articles but had no effect on fake news articles. A friend recommendation had no effect on fake articles and inconsistent effects on true articles. Belief that an article was true led to news engagement (liking, commenting, sharing), but an AI recommendation weakened this relationship, making confirmation bias the primary factor influencing engagement. The trustworthiness of the recommender only partially explained these effects, which suggests that other theoretical factors are at work. This study reveals that explicit labeling of AI curation of social media news stories does not help combat fake news; instead, it is likely to backfire and have unintended negative effects by decreasing belief in, and engagement with, true news articles.