How should the advancement of large language models affect the practice of science?
Publication type:
Article
Authors:
Binz, Marcel; Alaniz, Stephan; Roskies, Adina; Aczel, Balazs; Bergstrom, Carl T.; Allen, Colin; Schad, Daniel; Wulff, Dirk; West, Jevin D.; Zhang, Qiong; Shiffrin, Richard M.; Gershman, Samuel J.; Popov, Vencislav; Bender, Emily M.; Marelli, Marco; Botvinick, Matthew M.; Akata, Zeynep; Schulz, Eric
Affiliations:
Max Planck Society; Technical University of Munich; University of California System; University of California Santa Barbara; Eotvos Lorand University; University of Washington; University of Washington Seattle; University of Basel; Rutgers University System; Rutgers University New Brunswick; Indiana University System; Indiana University Bloomington; Harvard University; University of Zurich; University of Milano-Bicocca; Alphabet Inc.; Google Incorporated; DeepMind; University of London; University College London
Journal:
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
ISSN/ISBN:
0027-8424
DOI:
10.1073/pnas.2401227121
Publication date:
2025-02-04
Keywords:
AI
Abstract:
Large language models (LLMs) are being increasingly incorporated into scientific workflows. However, we have yet to fully grasp the implications of this integration. How should the advancement of large language models affect the practice of science? For this opinion piece, we have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in discussion. Schulz et al. make the argument that working with LLMs is not fundamentally different from working with human collaborators, while Bender et al. argue that LLMs are often misused and overhyped, and that their limitations warrant a focus on more specialized, readily interpretable tools. Marelli et al. emphasize the importance of transparent attribution and responsible use of LLMs. Finally, Botvinick and Gershman advocate that humans should retain responsibility for determining the scientific roadmap. To facilitate the discussion, the four perspectives are complemented with a response from each group. By putting these different perspectives in conversation, we aim to bring attention to important considerations within the academic community regarding the adoption of LLMs and their impact on both current and future scientific practices.