BEST SUBSET SELECTION VIA A MODERN OPTIMIZATION LENS
Type:
Article
Authors:
Bertsimas, Dimitris; King, Angela; Mazumder, Rahul
Affiliations:
Massachusetts Institute of Technology (MIT)
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/15-AOS1388
Publication date:
2016
Pages:
813-852
Keywords:
nonconcave penalized likelihood
variable selection
regression shrinkage
Lasso
sparsity
recovery
persistence
Abstract:
In the period 1991-2015, algorithmic advances in Mixed Integer Optimization (MIO), coupled with hardware improvements, have resulted in an astonishing 450 billion factor speedup in solving MIO problems. We present a MIO approach for solving the classical best subset selection problem of choosing k out of p features in linear regression given n observations. We develop a discrete extension of modern first-order continuous optimization methods to find high-quality feasible solutions that we use as warm starts to a MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if we terminate the algorithm early, (b) can accommodate side constraints on the coefficients of the linear regression and (c) extends to finding best subset solutions for the least absolute deviation loss function. Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with n in the 1000s and p in the 100s in minutes to provable optimality, and finds near-optimal solutions for n in the 100s and p in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than the Lasso and other popular sparse learning procedures in terms of achieving sparse solutions with good predictive power.
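The "discrete extension of modern first-order continuous optimization methods" mentioned in the abstract can be illustrated with a projected-gradient (hard-thresholding) iteration for the least-squares loss, followed by a least-squares polish on the selected support. The sketch below is an assumption-laden illustration of that idea, not the paper's exact algorithm: the function name, stopping rule, and defaults are hypothetical, and the paper couples such warm starts with a MIO solver to obtain provably optimal solutions.

```python
import numpy as np

def best_subset_first_order(X, y, k, max_iter=500, tol=1e-7):
    """Hypothetical sketch of a discrete first-order scheme for best
    subset selection: iterate beta <- H_k(beta - grad / L), where H_k
    keeps the k largest-magnitude coordinates (hard thresholding)."""
    n, p = X.shape
    # Lipschitz constant of the gradient of 0.5 * ||y - X beta||^2
    L = np.linalg.eigvalsh(X.T @ X).max()
    beta = np.zeros(p)
    for _ in range(max_iter):
        grad = X.T @ (X @ beta - y)        # gradient of the LS loss
        c = beta - grad / L                # plain gradient step
        new_beta = np.zeros(p)
        keep = np.argsort(-np.abs(c))[:k]  # project onto ||beta||_0 <= k
        new_beta[keep] = c[keep]
        if np.linalg.norm(new_beta - beta) < tol:
            beta = new_beta
            break
        beta = new_beta
    # Polish: unrestricted least squares on the chosen support
    support = np.flatnonzero(beta)
    beta_hat = np.zeros(p)
    beta_hat[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta_hat
```

Such a solution is feasible for the cardinality constraint by construction, which is what makes it usable as a warm start for a MIO formulation of the same problem.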