AI-AI bias: Large language models favor communications generated by large language models
Publication type:
Article
Authors:
Laurito, Walter; Davis, Benjamin; Grietzer, Peli; Gavenciak, Tomas; Bohm, Ada; Kulveit, Jan
Affiliation:
Charles University Prague
Journal:
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
ISSN:
0027-8424
DOI:
10.1073/pnas.2415697122
Publication date:
2025-08-05
Keywords:
racial-discrimination
Abstract:
Are large language models (LLMs) biased in favor of communications produced by LLMs, leading to possible antihuman discrimination? Using a classical experimental design inspired by employment discrimination studies, we tested widely used LLMs, including GPT-3.5, GPT-4, and a selection of recent open-weight models, in binary choice scenarios. These scenarios involved LLM-based assistants selecting between goods (consumer products, academic papers, and film viewings) described either by humans or by LLMs. Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options. This suggests the possibility of future AI systems implicitly discriminating against humans as a class, giving AI agents and AI-assisted humans an unfair advantage.