FAST GLOBAL CONVERGENCE OF GRADIENT METHODS FOR HIGH-DIMENSIONAL STATISTICAL RECOVERY
Publication type:
Article
Authors:
Agarwal, Alekh; Negahban, Sahand; Wainwright, Martin J.
Affiliations:
University of California System; University of California Berkeley; Massachusetts Institute of Technology (MIT)
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/12-AOS1032
Publication date:
2012
Pages:
2452-2482
Keywords:
Matrix decomposition
regression
selection
shrinkage
sparsity
Lasso
noisy
rates
Abstract:
Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the ambient dimension $d$ to grow with (and possibly exceed) the sample size $n$. Our theory identifies conditions under which projected gradient descent enjoys globally linear convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter $\theta^*$ and an optimal solution $\hat{\theta}$. By establishing these conditions with high probability for numerous statistical models, our analysis applies to a wide range of M-estimators, including sparse linear regression using Lasso; group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery using nuclear norm regularization; and matrix decomposition using a combination of the nuclear and $\ell_1$ norms. Overall, our analysis reveals interesting connections between statistical and computational efficiency in high-dimensional estimation.
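To make the composite gradient idea concrete, below is a minimal sketch of a proximal (composite) gradient iteration for the Lasso objective $(1/2n)\|y - X\theta\|_2^2 + \lambda\|\theta\|_1$, one of the M-estimators the abstract mentions. This is an illustrative implementation, not the authors' code; the function names, step-size choice, and stopping rule are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def composite_gradient_lasso(X, y, lam, step=None, max_iter=500, tol=1e-8):
    """Composite (proximal) gradient descent for the Lasso:
        minimize_theta (1/2n) ||y - X theta||_2^2 + lam * ||theta||_1
    Illustrative sketch; step defaults to 1/L with L the Lipschitz
    constant of the smooth part's gradient.
    """
    n, d = X.shape
    if step is None:
        step = n / np.linalg.norm(X, 2) ** 2  # largest singular value squared / n
    theta = np.zeros(d)
    for _ in range(max_iter):
        grad = X.T @ (X @ theta - y) / n          # gradient of the smooth loss
        theta_new = soft_threshold(theta - step * grad, step * lam)
        if np.linalg.norm(theta_new - theta) <= tol * max(1.0, np.linalg.norm(theta)):
            return theta_new
        theta = theta_new
    return theta

# Usage sketch: sparse linear regression in a high-dimensional regime (d > n).
rng = np.random.default_rng(0)
n, d, s = 100, 400, 5
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:s] = 1.0
y = X @ theta_star + 0.1 * rng.standard_normal(n)
lam = 0.1 * np.sqrt(np.log(d) / n)   # regularization level chosen for illustration
theta_hat = composite_gradient_lasso(X, y, lam)
```

Under conditions of the kind analyzed in the paper (restricted strong convexity/smoothness holding with high probability), such iterates contract geometrically toward $\hat{\theta}$ until they reach the statistical precision of the model.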