A PRECISE HIGH-DIMENSIONAL ASYMPTOTIC THEORY FOR BOOSTING AND MINIMUM-l1-NORM INTERPOLATED CLASSIFIERS

Publication type:
Article
Authors:
Liang, Tengyuan; Sur, Pragya
Affiliations:
University of Chicago; Harvard University
Journal:
ANNALS OF STATISTICS
ISSN:
0090-5364
DOI:
10.1214/22-AOS2170
Publication date:
2022
Pages:
1669-1695
Keywords:
logistic regression; generalization error; robust regression; margin; existence; convergence; consistency; algorithms; prediction; models
Abstract:
This paper establishes a precise high-dimensional asymptotic theory for boosting on separable data, from both statistical and computational perspectives. We consider a high-dimensional setting where the number of features (weak learners) p scales with the sample size n, in an overparametrized regime. Under a class of statistical models, we provide an exact analysis of the generalization error of boosting when the algorithm interpolates the training data and maximizes the empirical l1-margin. Further, we explicitly pin down the relation between the boosting test error and the optimal Bayes error, as well as the proportion of active features at interpolation (with zero initialization). In turn, these precise characterizations answer certain questions raised in (Neural Comput. 11 (1999) 1493-1517; Ann. Statist. 26 (1998) 1651-1686) surrounding boosting, under assumed data generating processes. At the heart of our theory lies an in-depth study of the maximum l1-margin, which can be accurately described by a new system of nonlinear equations; to analyze this margin, we rely on Gaussian comparison techniques and develop a novel uniform deviation argument. Our statistical and computational arguments can handle (1) any finite-rank spiked covariance model for the feature distribution and (2) variants of boosting corresponding to general lq-geometry, q in [1, 2]. As a final component, via the Lindeberg principle, we establish a universality result showcasing that the scaled l1-margin (asymptotically) remains the same, whether the covariates used for boosting arise from a nonlinear random feature model or an appropriately linearized model with matching moments.
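The setting the abstract describes can be illustrated numerically. The sketch below (not code from the paper; all sizes, the planted sparse signal, and the step size are illustrative assumptions) runs small-step boosting, viewed as coordinate descent on the exponential loss with the p features as weak learners, on overparametrized Gaussian data with p > n. On such separable data, small-step boosting from zero initialization is known to interpolate the training labels and push up the normalized empirical l1-margin.

```python
import numpy as np

# Hypothetical illustration (not the paper's code): boosting as coordinate
# descent on the exponential loss, with features as weak learners, in an
# overparametrized regime p > n. All sizes and constants are arbitrary.
rng = np.random.default_rng(0)
n, p = 50, 200                          # overparametrized: p > n
X = rng.standard_normal((n, p)) / np.sqrt(p)
beta_star = np.zeros(p)
beta_star[:10] = 3.0                    # sparse planted signal (assumed model)
y = np.sign(X @ beta_star)

beta = np.zeros(p)                      # zero initialization, as in the abstract
eta = 0.1                               # small fixed step size ("shrinkage")
for _ in range(10000):
    m = -y * (X @ beta)
    w = np.exp(m - m.max())             # exponential-loss weights, rescaled for
                                        # numerical stability (argmax and sign
                                        # below are invariant to the rescaling)
    grad = -X.T @ (y * w)               # coordinate-wise loss gradients
    j = int(np.argmax(np.abs(grad)))    # steepest coordinate = chosen weak learner
    beta[j] -= eta * np.sign(grad[j])

margins = y * (X @ beta)
l1_margin = margins.min() / np.abs(beta).sum()  # normalized empirical l1-margin
print(f"interpolates: {bool(np.all(margins > 0))}")
print(f"normalized l1-margin: {l1_margin:.4f}")
```

Tracking `l1_margin` over iterations, and the fraction of nonzero coordinates of `beta` at the first interpolating iterate, gives empirical counterparts of the quantities (scaled l1-margin, proportion of active features) that the paper characterizes exactly in the n, p → ∞ limit.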