A Tuning-free Robust and Efficient Approach to High-dimensional Regression

Publication Type:
Article
Authors:
Wang, Lan; Peng, Bo; Bradic, Jelena; Li, Runze; Wu, Yunan
Affiliations:
University of Miami; Adobe Systems Inc.; University of California System; University of California San Diego; Pennsylvania Commonwealth System of Higher Education (PCSHE); Pennsylvania State University; Pennsylvania State University - University Park; University of Minnesota System; University of Minnesota Twin Cities
Journal:
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
ISSN/ISBN:
0162-1459
DOI:
10.1080/01621459.2020.1840989
Publication Date:
2020
Pages:
1700-1714
Keywords:
nonconcave penalized likelihood; generalized linear models; variable selection; parameter selection; quantile regression; Lasso; sparsity; regularization; consistency; shrinkage
Abstract:
We introduce a novel approach for high-dimensional regression with theoretical guarantees. The new procedure overcomes the challenge of tuning parameter selection for Lasso and possesses several appealing properties. It uses an easily simulated tuning parameter that automatically adapts to both the unknown random error distribution and the correlation structure of the design matrix. It is robust, with substantial efficiency gains for heavy-tailed random errors, while maintaining high efficiency for normal random errors. Compared with alternative robust regression procedures, it also enjoys the property of being equivariant when the response variable undergoes a scale transformation. Computationally, it can be solved efficiently via linear programming. Theoretically, under weak conditions on the random error distribution, we establish a finite-sample error bound with a near-oracle rate for the new estimator with the simulated tuning parameter. Our results make useful contributions toward bridging the gap between the practice and theory of Lasso and its variants. We also prove that further improvement in efficiency can be achieved by a second-stage enhancement with some light tuning. Our simulation results demonstrate that the proposed methods often outperform cross-validated Lasso in various settings.
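The abstract's "easily simulated tuning parameter" rests on a pivotal-quantity idea: in a rank-based formulation, the score at beta = 0 depends on the data only through the ranks of the errors, which for iid errors form a uniform random permutation regardless of the error distribution, so its quantiles can be simulated without knowing that distribution. The following is a minimal illustrative sketch of that simulation idea, not the paper's exact formulation; the function name, the rank-score scaling, and the default quantile level are all assumptions for illustration.

```python
import numpy as np


def simulate_tuning_parameter(X, alpha=0.1, B=500, rng=None):
    """Monte Carlo approximation of a distribution-free tuning parameter.

    Illustrative sketch: simulate the (1 - alpha)-quantile of the sup-norm
    of a rank-based score evaluated at beta = 0. Since the ranks of iid
    errors are a uniform random permutation, no knowledge of the error
    distribution is required.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    sup_norms = np.empty(B)
    for b in range(B):
        r = rng.permutation(n) + 1.0              # random ranks 1..n
        s = (2.0 * r - (n + 1)) / (n * (n - 1))   # centered, scaled rank scores
        sup_norms[b] = np.max(np.abs(X.T @ s))    # sup-norm of the score at beta = 0
    return np.quantile(sup_norms, 1.0 - alpha)


# Usage sketch: simulate a tuning parameter for a given design matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
lam = simulate_tuning_parameter(X, alpha=0.1, B=500, rng=1)
```

Because the simulated quantile depends on the design matrix `X` itself, the resulting tuning parameter adapts to the correlation structure of the design, which is the adaptivity property the abstract highlights.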