Transforming literature screening: The emerging role of large language models in systematic reviews
Publication type:
Review
Authors:
Delgado-Chaves, Fernando M.; Jennings, Matthew J.; Atalaia, Antonio; Wolff, Justus; Horvath, Rita; Mamdouh, Zeinab M.; Baumbach, Jan; Baumbach, Linda
Affiliations:
University of Hamburg; Columbia University; Institut National de la Sante et de la Recherche Medicale (Inserm); Sorbonne Universite; University of Cambridge; Maastricht University; Egyptian Knowledge Bank (EKB); Zagazig University; University of Southern Denmark; University Medical Center Hamburg-Eppendorf
Journal:
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
ISSN:
0027-8424
DOI:
10.1073/pnas.2411962122
Publication date:
2025-01-14
Keywords:
Abstract:
Systematic reviews (SRs) synthesize evidence-based medical literature, but they involve labor-intensive manual article screening. Large language models (LLMs) can select relevant literature, but their quality and efficacy compared to human reviewers remain to be determined. We evaluated the overlap between the title- and abstract-based article selections of 18 different LLMs and human-selected articles for three SRs. In the three SRs, 185/4,662, 122/1,741, and 45/66 articles were selected and considered for full-text screening by two independent reviewers. Due to technical variations and the inability of the LLMs to classify all records, the LLMs' considered sample sizes were smaller. However, on average, the 18 LLMs correctly classified 4,294 (min 4,130; max 4,329), 1,539 (min 1,449; max 1,574), and 27 (min 22; max 37) of the titles and abstracts as either included or excluded for the three SRs, respectively. Additional analysis revealed that the definitions of the inclusion criteria and the conceptual designs significantly influenced LLM performance. In conclusion, LLMs can reduce one reviewer's workload by between 33% and 93% during title and abstract screening. However, the exact formulation of the inclusion and exclusion criteria should be refined beforehand for the LLMs to provide ideal support.
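The comparison described in the abstract, treating the human reviewers' include/exclude decisions as the reference and skipping records the LLM failed to classify, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the label encoding are assumptions.

```python
# Illustrative sketch (not the authors' pipeline): comparing an LLM's
# title/abstract screening decisions against human reviewer decisions.
# Labels: True = include for full-text screening, False = exclude,
# None = the LLM could not classify the record (shrinking the effective
# sample size, as reported in the abstract).

def screening_agreement(human, llm):
    """Return (n_correct, accuracy) with human labels as ground truth.

    Records the LLM left unclassified (None) are dropped before scoring.
    """
    pairs = [(h, m) for h, m in zip(human, llm) if m is not None]
    n_correct = sum(h == m for h, m in pairs)
    return n_correct, n_correct / len(pairs)

# Toy example with six records, one of which the LLM cannot classify.
human = [True, False, False, True, False, False]
llm   = [True, False, True,  True, None,  False]
correct, accuracy = screening_agreement(human, llm)
# correct == 4, accuracy == 0.8 over the five classified records
```

In the paper's setting, the same count over each SR's full record set yields the reported averages (e.g., 4,294 of 4,662 records classified correctly on average in the first SR).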