STATISTICAL METHODS FOR REPLICABILITY ASSESSMENT
Document type:
Article
Authors:
Hung, Kenneth; Fithian, William
Affiliations:
University of California System; University of California Berkeley
Journal:
ANNALS OF APPLIED STATISTICS
ISSN/ISBN:
1932-6157
DOI:
10.1214/20-AOAS1336
Publication date:
2020
Pages:
1063-1087
Keywords:
false discovery rate
science-wise false discovery rate
confidence intervals
publication bias
inference
power
Abstract:
Large-scale replication studies like the Reproducibility Project: Psychology (RP:P) provide invaluable systematic data on scientific replicability, but most analyses and interpretations of the data fail to agree on a definition of replicability or to disentangle the inexorable consequences of known selection bias from competing explanations. We discuss three concrete definitions of replicability based on: (1) whether published findings about the signs of effects are mostly correct; (2) how effective replication studies are in reproducing whatever true effect size was present in the original experiment; and (3) whether true effect sizes tend to diminish in replication. We apply techniques from multiple testing and post-selection inference to develop new methods that answer these questions while explicitly accounting for selection bias. Our analyses suggest that the RP:P dataset is largely consistent with publication bias due to selection of significant effects. The methods in this paper make no distributional assumptions about the true effect sizes.
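The selection mechanism the abstract refers to, publication of only significant results, can be illustrated with a small simulation. This is a sketch of the general phenomenon, not a method from the paper: the true effect size, unit standard error, and 1.96 significance cutoff below are illustrative assumptions.

```python
import random
import statistics

random.seed(0)
true_effect = 0.2   # hypothetical small true effect (assumption, not from the paper)
n_studies = 20000

all_est, published = [], []
for _ in range(n_studies):
    # Each study observes the true effect plus unit-standard-error noise.
    est = random.gauss(true_effect, 1.0)
    all_est.append(est)
    # Only "significant" estimates (|z| > 1.96) are published.
    if abs(est) > 1.96:
        published.append(est)

# The mean over all studies is close to the true effect,
# while the mean over published studies is substantially inflated.
print("mean of all estimates:      ", round(statistics.mean(all_est), 2))
print("mean of published estimates:", round(statistics.mean(published), 2))
```

Under these assumptions the published estimates overstate the true effect several-fold, which is why a replication run at face value tends to find a smaller effect than the original, even with no change in the underlying truth.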
Source URL: