Cross-Validation: What Does It Estimate and How Well Does It Do It?
Publication type:
Article
Authors:
Bates, Stephen; Hastie, Trevor; Tibshirani, Robert
Affiliations:
University of California System; University of California Berkeley; Stanford University
Journal:
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
ISSN/ISBN:
0162-1459
DOI:
10.1080/01621459.2023.2197686
Publication date:
2024
Pages:
1434-1445
Keywords:
model selection
prediction error
variance
Abstract:
Cross-validation is a widely used technique to estimate prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error for the model at hand, fit to the training data. We prove that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit on other unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Next, we show that the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used for both training and testing, there are correlations among the measured accuracies in each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme to estimate this variance more accurately, and show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail. Lastly, our analysis also shows that when producing confidence intervals for prediction accuracy with simple data splitting, one should not refit the model on the combined data, since this invalidates the confidence intervals. Supplementary materials for this article are available online.
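Note: below is a minimal, illustrative Python sketch of the "naive" K-fold cross-validation confidence interval that the abstract critiques, i.e., the usual variance estimate that treats the per-observation errors as independent. The simulated data, the use of scikit-learn, and all variable names are assumptions made for illustration; the article's nested cross-validation scheme is not reproduced here.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Simulated linear-model data (illustrative only).
n, p = 100, 5
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

# Naive K-fold cross-validation estimate of prediction error (squared loss).
K = 10
errors = np.empty(n)  # per-observation squared errors, filled fold by fold
for train_idx, test_idx in KFold(n_splits=K, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    errors[test_idx] = (y[test_idx] - model.predict(X[test_idx])) ** 2

cv_estimate = errors.mean()

# The naive standard error treats the n errors as independent. Per the abstract,
# each point is used for both training and testing, so the errors are correlated,
# the variance estimate is too small, and this interval can undercover.
naive_se = errors.std(ddof=1) / np.sqrt(n)
lo, hi = cv_estimate - 1.96 * naive_se, cv_estimate + 1.96 * naive_se
print(f"CV estimate: {cv_estimate:.3f}, naive 95% CI: ({lo:.3f}, {hi:.3f})")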