When does more regularization imply fewer degrees of freedom? Sufficient conditions and counterexamples
Publication type:
Article
Authors:
Kaufman, S.; Rosset, S.
Affiliation:
Tel Aviv University
Journal:
BIOMETRIKA
ISSN/ISBN:
0006-3444
DOI:
10.1093/biomet/asu034
Publication date:
2014
Pages:
771-784
Keywords:
cross-validation
regression
selection
Lasso
model
error
Abstract:
Regularization aims to improve prediction performance by trading an increase in training error for better agreement between training and prediction errors, an agreement often captured through decreased degrees of freedom. This paper gives examples showing that regularization can instead increase the degrees of freedom in common models, including the lasso and ridge regression. In such situations, both training error and degrees of freedom increase, making the regularization inherently without merit. Two important scenarios are then described in which the expected reduction in degrees of freedom is guaranteed: all symmetric linear smoothers, and convex-constrained linear regression models such as ridge regression and the lasso, when compared to unconstrained linear regression.
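The symmetric-linear-smoother case from the abstract can be illustrated numerically. For a linear smoother with fitted values y_hat = S y, the degrees of freedom equal trace(S); ridge regression's smoother matrix S(lam) = X (X'X + lam I)^{-1} X' is symmetric, so by the result stated above its degrees of freedom can only decrease as the penalty grows. The sketch below (my own illustration, not code from the paper; the matrix dimensions and penalty grid are arbitrary choices) checks this monotonicity:

```python
import numpy as np

# Illustrative sketch: degrees of freedom of ridge regression as trace
# of its (symmetric) smoother matrix. Dimensions and lambda grid are
# arbitrary choices for demonstration.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))  # n = 50 observations, p = 5 predictors

def ridge_df(X, lam):
    """Trace of the ridge hat matrix X (X'X + lam I)^{-1} X'."""
    p = X.shape[1]
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(S)

dfs = [ridge_df(X, lam) for lam in (0.0, 1.0, 10.0, 100.0)]
print(dfs)  # starts at p = 5 (lam = 0) and decreases as lam grows
```

At lam = 0 the smoother is the least-squares projection, so the trace equals p; each eigenvalue of S(lam) has the form d^2 / (d^2 + lam), which shrinks monotonically in lam, matching the guaranteed reduction for symmetric linear smoothers.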