Heuristics of instability and stabilization in model selection
Publication type:
Article
Author:
Breiman, L
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
Publication date:
1996
Pages:
2350-2383
Keywords:
regression
Abstract:
In model selection, usually a "best" predictor is chosen from a collection $\{\hat{\mu}(\cdot, s)\}$ of predictors, where $\hat{\mu}(\cdot, s)$ is the minimum least-squares predictor in a collection $\mathcal{U}_s$ of predictors. Here $s$ is a complexity parameter; that is, the smaller $s$, the lower dimensional/smoother the models in $\mathcal{U}_s$. If $L$ is the data used to derive the sequence $\{\hat{\mu}(\cdot, s)\}$, the procedure is called unstable if a small change in $L$ can cause large changes in $\{\hat{\mu}(\cdot, s)\}$. With a crystal ball, one could pick the predictor in $\{\hat{\mu}(\cdot, s)\}$ having minimum prediction error. Without prescience, one uses test sets, cross-validation and so forth. The difference in prediction error between the crystal-ball selection and the statistician's choice we call predictive loss. For an unstable procedure the predictive loss is large. This is shown by some analytics in a simple case and by simulation results in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence $\{\hat{\mu}(\cdot, s)\}$, and then averaging over many such predictor sequences.
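The abstract's stabilization idea (perturb the data, rerun the whole select-then-fit procedure, average the selected predictors) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's own experiment: the synthetic data, the use of nested least-squares subsets as the family indexed by the complexity parameter $s$, a single validation split as the statistician's selection rule, and bootstrap resampling as the particular perturbation of $L$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 60 observations, 10 features, sparse truth.
n, p = 60, 10
X = rng.normal(size=(n, p))
beta = np.array([3.0, 1.5, 0.0, 0.0, 2.0] + [0.0] * 5)
y = X @ beta + rng.normal(scale=2.0, size=n)

def fit_sequence(X, y):
    """Least-squares fits on nested subsets {x_1..x_s}, s = 1..p.

    Smaller s means a lower-dimensional model, matching the paper's
    complexity parameter. Returns one coefficient vector per s.
    """
    coefs = []
    for s in range(1, X.shape[1] + 1):
        b, *_ = np.linalg.lstsq(X[:, :s], y, rcond=None)
        full = np.zeros(X.shape[1])
        full[:s] = b
        coefs.append(full)
    return coefs

def select(coefs, X_val, y_val):
    """The statistician's choice: pick s minimizing validation error."""
    errs = [np.mean((y_val - X_val @ b) ** 2) for b in coefs]
    return coefs[int(np.argmin(errs))]

# Unstable baseline: one select-then-fit pass on a single split.
split = n // 2
b_unstable = select(fit_sequence(X[:split], y[:split]), X[split:], y[split:])

# Stabilized: perturb the data (here, bootstrap resampling of L), rerun the
# entire select-then-fit procedure, and average the selected predictors.
B = 50
b_avg = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)  # one bootstrap perturbation of L
    Xb, yb = X[idx], y[idx]
    bs = fit_sequence(Xb[:split], yb[:split])
    b_avg += select(bs, Xb[split:], yb[split:])
b_avg /= B

print("unstable coefficients:  ", np.round(b_unstable, 2))
print("stabilized coefficients:", np.round(b_avg, 2))
```

Rerunning the sketch with different seeds shows the point qualitatively: the single-pass selection jumps between subset sizes as the data change, while the averaged predictor varies much less from run to run.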