CLASSIFICATION ACCURACY AS A PROXY FOR TWO-SAMPLE TESTING

Publication type:
Article
Authors:
Kim, Ilmun; Ramdas, Aaditya; Singh, Aarti; Wasserman, Larry
Affiliation:
Carnegie Mellon University
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/20-AOS1962
Publication date:
2021
Pages:
411-434
Keywords:
permutations; property
Abstract:
When data analysts train a classifier and check if its accuracy is significantly different from chance, they are implicitly performing a two-sample test. We investigate the statistical properties of this flexible approach in the high-dimensional setting. We prove two results that hold for any classifier in any dimension: if its true error remains epsilon-better than chance for some epsilon > 0 as d, n -> infinity, then (a) the permutation-based test is consistent (has power approaching one), and (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent. To get a finer understanding of the rates of consistency, we study a specialized setting of distinguishing Gaussians with mean difference delta and common (known or unknown) covariance Sigma, when d/n -> c in (0, infinity). We study variants of Fisher's linear discriminant analysis (LDA), such as naive Bayes, in a nontrivial regime when epsilon -> 0 (the Bayes classifier has true accuracy approaching 1/2), and contrast their power with that of corresponding variants of Hotelling's test. Surprisingly, the expressions for their power match exactly in terms of n, d, delta and Sigma, and the LDA approach is only worse by a constant factor, achieving an asymptotic relative efficiency (ARE) of 1/sqrt(pi) for balanced samples. We also extend our results to high-dimensional elliptical distributions with finite kurtosis. Other results of independent interest include minimax lower bounds and the optimality of Hotelling's test when d = o(n). Simulation results validate our theory, and we present practical takeaway messages along with natural open problems.
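
For intuition, the permutation-based test described in the abstract can be sketched in a few lines of Python. This is a minimal illustration under assumed choices (a logistic-regression classifier, a single 50/50 train/test split, and 500 permutations), not the authors' exact protocol:

import numpy as np
from sklearn.linear_model import LogisticRegression

def classifier_two_sample_test(X, Y, n_permutations=500, seed=0):
    # Pool the two samples and label them 0 (from X) and 1 (from Y);
    # the two-sample problem becomes a binary classification problem.
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    labels = np.concatenate([np.zeros(len(X)), np.ones(len(Y))])

    def heldout_accuracy(z, y):
        # Estimate accuracy on a single held-out half; cross-validation
        # would reduce variance at extra computational cost.
        idx = rng.permutation(len(z))
        half = len(z) // 2
        clf = LogisticRegression(max_iter=1000).fit(z[idx[:half]], y[idx[:half]])
        return clf.score(z[idx[half:]], y[idx[half:]])

    observed = heldout_accuracy(Z, labels)
    # Permutation null: shuffling the labels destroys any X-vs-Y signal,
    # so the accuracies below approximate the distribution under chance.
    null = np.array([heldout_accuracy(Z, rng.permutation(labels))
                     for _ in range(n_permutations)])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_permutations)
    return observed, p_value

# Example: two Gaussians in d = 50 dimensions with a small mean shift.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 50))
Y = rng.normal(0.2, 1.0, size=(200, 50))
acc, p = classifier_two_sample_test(X, Y)
print(f"held-out accuracy = {acc:.3f}, permutation p-value = {p:.3f}")

Rejecting when the permutation p-value is small is the test analyzed in the paper; the computationally efficient variant mentioned in the abstract instead compares the observed held-out accuracy against a Gaussian approximation of its null distribution, avoiding the cost of retraining across permutations.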