Toward a more credible assessment of the credibility of science by many-analyst studies

Publication type:
Article
Authors:
Auspurg, Katrin; Bruederl, Josef
Affiliation:
University of Munich
Journal:
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
ISSN:
0027-8424
DOI:
10.1073/pnas.2404035121
Publication date:
2024-09-17
Keywords:
meta-analysis; replication; uncertainty; immigration; model
Abstract:
We discuss a relatively new meta-scientific research design: many-analyst studies that attempt to assess the replicability and credibility of research based on large-scale observational data. In these studies, a large number of analysts try to answer the same research question using the same data. The key idea is that the greater the variation in results, the greater the uncertainty in answering the research question and, accordingly, the lower the credibility of any individual research finding. Compared to individual replications, the large crowd of analysts allows for a more systematic investigation of uncertainty and its sources. However, many-analyst studies are also resource-intensive, and there are some doubts about their potential to provide credible assessments. We identify three issues that any many-analyst study must address: 1) identifying the source of variation in the results; 2) providing an incentive structure similar to that of standard research; and 3) conducting a proper meta-analysis of the results. We argue that some recent many-analyst studies have failed to address these issues satisfactorily and have therefore provided an overly pessimistic assessment of the credibility of science. We also provide some concrete guidance on how future many-analyst studies could provide a more constructive assessment.