How AI can distort human beliefs

Publication type:
Editorial Material
Authors:
Kidd, Celeste; Birhane, Abeba
Affiliations:
University of California System; University of California Berkeley; Trinity College Dublin
Journal:
SCIENCE
ISSN:
0036-8075
DOI:
10.1126/science.adi0248
Publication date:
2023-06-01
Pages:
1222-1223
Keywords:
Abstract:
Models can convey biases and false information to users. Individual humans form their beliefs by sampling a small subset of the available data in the world. Once those beliefs are formed with high certainty, they can become resistant to revision. Fabrication and bias in generative artificial intelligence (AI) models are established phenomena that can occur as part of regular system use, in the absence of any malevolent forces seeking to push bias or disinformation. However, the transmission of false information and bias from these models to people has been largely absent from the discourse. Overhyped, unrealistic, and exaggerated capabilities permeate how generative AI models are presented, which contributes to the popular misconception that these models exceed human-level reasoning and exacerbates the risk of transmitting false information and negative stereotypes to people.