DEFORMED SEMICIRCLE LAW AND CONCENTRATION OF NONLINEAR RANDOM MATRICES FOR ULTRA-WIDE NEURAL NETWORKS

Document type:
Article
Authors:
Wang, Zhichao; Zhu, Yizhe
Affiliations:
University of California System; University of California San Diego; University of California Irvine
Journal:
ANNALS OF APPLIED PROBABILITY
ISSN:
1050-5164
DOI:
10.1214/23-AAP2010
Publication date:
2024
Pages:
1896-1947
Keywords:
sample covariance matrices; limiting spectral distribution; convergence; eigenvalue; regression; dimension; larger p/n
Abstract:
In this paper, we investigate a two-layer fully connected neural network of the form $f(X) = \frac{1}{\sqrt{d_1}}\, a^\top \sigma(WX)$, where $X \in \mathbb{R}^{d_0 \times n}$ is a deterministic data matrix, $W \in \mathbb{R}^{d_1 \times d_0}$ and $a \in \mathbb{R}^{d_1}$ are random Gaussian weights, and $\sigma$ is a nonlinear activation function. We study the limiting spectral distributions of two empirical kernel matrices associated with $f(X)$: the empirical conjugate kernel (CK) and neural tangent kernel (NTK), beyond the linear-width regime ($d_1 \asymp n$). We focus on the ultra-wide regime, where the width $d_1$ of the first layer is much larger than the sample size $n$. Under appropriate assumptions on $X$ and $\sigma$, a deformed semicircle law emerges as $d_1/n \to \infty$ and $n \to \infty$. We first prove this limiting law for generalized sample covariance matrices with some dependency. To specialize it to our neural network model, we provide a nonlinear Hanson-Wright inequality suitable for neural networks with random weights and Lipschitz activation functions. We also demonstrate nonasymptotic concentration of the empirical CK and NTK around their limiting kernels in spectral norm, along with lower bounds on their smallest eigenvalues. As an application, we show that random feature regression induced by the empirical kernel achieves the same asymptotic performance as its limiting kernel regression in the ultra-wide regime. This allows us to compute the asymptotic training and test errors for random feature regression using the corresponding kernel regression.
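To make the setup concrete, below is a minimal NumPy sketch (illustrative only, not the authors' code) of the empirical conjugate kernel for this architecture, assuming the standard definition $\mathrm{CK} = \frac{1}{d_1}\sigma(WX)^\top \sigma(WX)$. The chosen dimensions, the tanh activation, the Gaussian stand-in for the deterministic data matrix $X$, and the Monte Carlo centering via the helper empirical_ck are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Ultra-wide regime: width d1 much larger than sample size n (illustrative sizes).
d0, n, d1 = 100, 200, 10_000

# Gaussian stand-in for the deterministic data matrix X, columns roughly unit norm.
X = rng.standard_normal((d0, n)) / np.sqrt(d0)

def empirical_ck(X, d1, rng):
    # One draw of the random Gaussian first-layer weights W in R^{d1 x d0}.
    W = rng.standard_normal((d1, X.shape[0]))
    Y = np.tanh(W @ X)              # tanh: a centered, Lipschitz activation
    return Y.T @ Y / d1             # empirical conjugate kernel, n x n

CK = empirical_ck(X, d1, rng)

# Monte Carlo estimate of the limiting kernel E[CK] from fresh weight draws.
Phi = np.mean([empirical_ck(X, d1, rng) for _ in range(10)], axis=0)

# Centered and rescaled: as n -> infinity with d1/n -> infinity, the spectrum of
# sqrt(d1/n) * (CK - E[CK]) is expected to follow a deformed semicircle law.
M = np.sqrt(d1 / n) * (CK - Phi)
eigs = np.linalg.eigvalsh(M)
print(f"rescaled eigenvalue range: [{eigs[0]:.3f}, {eigs[-1]:.3f}]")
print(f"smallest eigenvalue of CK: {np.linalg.eigvalsh(CK)[0]:.4f}")

Increasing d1 relative to n in this sketch should tighten the concentration of CK around its expectation, which is the spectral-norm concentration regime studied in the paper.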