SUPPORT RECOVERY WITHOUT INCOHERENCE: A CASE FOR NONCONVEX REGULARIZATION
Publication type:
Article
Authors:
Loh, Po-Ling; Wainwright, Martin J.
Affiliations:
University of Wisconsin System; University of Wisconsin Madison; University of California System; University of California Berkeley
Journal:
ANNALS OF STATISTICS
ISSN:
0090-5364
DOI:
10.1214/16-AOS1530
Publication year:
2017
Pages:
2455-2482
Keywords:
nonconcave penalized likelihood
model selection consistency
M-estimators
variable selection
sparsity recovery
Dantzig selector
regression
Lasso
noisy
optimization
Abstract:
We develop a new primal-dual witness proof framework that may be used to establish variable selection consistency and ℓ∞-bounds for sparse regression problems, even when the loss function and regularizer are nonconvex. We use this method to prove two theorems concerning support recovery and ℓ∞-guarantees for a regression estimator in a general setting. Notably, our theory applies to all potential stationary points of the objective and certifies that the stationary point is unique under mild conditions. Our results provide a strong theoretical justification for the use of nonconvex regularization: for certain nonconvex regularizers with vanishing derivative away from the origin, any stationary point can be used to recover the support without requiring the typical incoherence conditions present in ℓ1-based methods. We also derive corollaries illustrating the implications of our theorems for composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log-likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies that corroborate our theoretical predictions.
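The abstract's key property, a nonconvex regularizer whose derivative vanishes away from the origin, is exhibited by standard penalties such as the minimax concave penalty (MCP). As a minimal illustration (not the paper's own code; the penalty formula is the standard MCP, and the parameter names `lam` and `gamma` are generic), the sketch below shows that the MCP derivative is identically zero beyond a threshold, in contrast to the ℓ1 penalty whose derivative stays at λ:

```python
import numpy as np

def mcp(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    then constant at gamma*lam^2/2. Concave, but lam*|t| - mcp(t) is convex."""
    a = np.abs(t)
    return np.where(a <= gamma * lam, lam * a - a**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def mcp_grad(t, lam=1.0, gamma=3.0):
    """Derivative of the MCP: shrinks linearly from lam down to 0,
    and vanishes identically for |t| >= gamma*lam (the property the
    abstract's incoherence-free support recovery relies on)."""
    a = np.abs(t)
    return np.where(a <= gamma * lam, np.sign(t) * (lam - a / gamma), 0.0)
```

Because the penalty is flat for large coefficients, sufficiently strong signal coordinates incur no shrinkage bias at a stationary point, which is the mechanism behind the "vanishing derivative away from the origin" condition in the abstract.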