Scaling language model size yields diminishing returns for single-message political persuasion

Publication type:
Article
Authors:
Hackenburg, Kobi; Tappin, Ben M.; Roettger, Paul; Hale, Scott A.; Bright, Jonathan; Margetts, Helen
Affiliations:
University of Oxford; Alan Turing Institute; University of London; London School of Economics & Political Science; Bocconi University
Journal:
Proceedings of the National Academy of Sciences of the United States of America
ISSN/ISBN:
0027-8424
DOI:
10.1073/pnas.2413443122
Publication date:
2025-03-11
Keywords:
Abstract:
Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how much further this persuasiveness may increase with model size. Here, we generate 720 persuasive messages on 10 US political issues from 24 language models spanning several orders of magnitude in size. We then deploy these messages in a large-scale randomized survey experiment (N = 25,982) to estimate the persuasive capability of each model. Our findings are twofold. First, we find evidence that model persuasiveness is characterized by sharply diminishing returns, such that current frontier models are only slightly more persuasive than models an order of magnitude or more smaller. Second, we find that the association between language model size and persuasiveness shrinks toward zero and is no longer statistically significant once we adjust for mere task completion (coherence, staying on topic), a pattern that highlights task completion as a potential mediator of larger models' persuasive advantage. Given that current frontier models are already at ceiling on this task-completion metric in our setting, our results together suggest that further scaling of model size may not substantially increase the persuasiveness of static LLM-generated political messages.
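The adjustment analysis the abstract describes — regressing persuasiveness on log model size with and without a task-completion covariate — can be illustrated with a small simulation. The sketch below is hypothetical, not the authors' data or code: the variable names, the simulated parameter counts, and the saturating task-completion score are all assumptions made for illustration. It uses an ordinary least squares fit from statsmodels to show how a positive size coefficient can shrink toward zero once a saturating mediator is adjusted for.

```python
# Hypothetical sketch (not the authors' analysis): how adjusting for a
# saturating task-completion mediator can shrink the estimated association
# between log model size and persuasiveness toward zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated per-model data: 24 models spanning several orders of magnitude
# in parameter count, a task-completion score that saturates with size, and
# a persuasion effect (percentage points) driven mostly by task completion.
n_models = 24
log_params = rng.uniform(8, 12, n_models)              # log10(parameters)
task_completion = 1 / (1 + np.exp(-(log_params - 9)))  # saturating mediator
persuasion = 5 * task_completion + rng.normal(0, 0.5, n_models)

# Unadjusted model: persuasion ~ log(size). Because the mediator saturates,
# the fitted association is positive but reflects diminishing returns.
unadjusted = sm.OLS(persuasion, sm.add_constant(log_params)).fit()

# Adjusted model: persuasion ~ log(size) + task completion. If task
# completion mediates the size advantage, the size coefficient shrinks.
X = sm.add_constant(np.column_stack([log_params, task_completion]))
adjusted = sm.OLS(persuasion, X).fit()

print("unadjusted size coefficient:", unadjusted.params[1])
print("adjusted size coefficient:  ", adjusted.params[1])
```

In this toy setup, the adjusted size coefficient is close to zero, mirroring the pattern the abstract reports; the real analysis involves a randomized experiment and per-model treatment-effect estimates rather than simulated draws.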