ANALYSIS OF A TWO-LAYER NEURAL NETWORK VIA DISPLACEMENT CONVEXITY
Publication type:
Article
Authors:
Javanmard, Adel; Mondelli, Marco; Montanari, Andrea
Affiliations:
University of Southern California; Institute of Science and Technology Austria; Stanford University; Stanford University
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/20-AOS1945
Publication date:
2020
Pages:
3619-3642
Keywords:
porous-medium equation
entropy dissipation
approximation
propagation
regression
Abstract:
Fitting a function by using linear combinations of a large number N of simple components is one of the most fruitful ideas in statistical learning. This idea lies at the core of a variety of methods, from two-layer neural networks to kernel regression, to boosting. In general, the resulting risk minimization problem is nonconvex and is solved by gradient descent or its variants. Unfortunately, little is known about global convergence properties of these approaches. Here, we consider the problem of learning a concave function f on a compact convex domain Ω ⊂ ℝ^d, using linear combinations of bump-like components (neurons). The parameters to be fitted are the centers of N bumps, and the resulting empirical risk minimization problem is highly nonconvex. We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over Ω. Further, when the bump width δ tends to 0, this gradient flow has a limit which is a viscous porous medium equation. Remarkably, the cost function optimized by this gradient flow exhibits a special property known as displacement convexity, which implies exponential convergence rates for N → ∞, δ → 0. Surprisingly, this asymptotic theory appears to capture well the behavior for moderate values of δ, N. Explaining this phenomenon, and understanding the dependence on δ, N in a quantitative manner, remains an outstanding challenge.
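Illustration (not from the paper): as a minimal toy sketch of the setup in the abstract, the snippet below fits a concave target on Ω = [-1, 1] with an equal-weight combination of N Gaussian-shaped bumps of width δ, running plain gradient descent on the bump centers only. The specific target f(x) = (3/4)(1 - x²), the Gaussian bump profile, the grid discretization of the risk, and all hyperparameters are assumptions made for this illustration and are not taken from the authors' setup or code.

```python
# Toy sketch (illustrative assumptions throughout): learn a concave target on
# Omega = [-1, 1] with fhat(x) = (1/N) * sum_i bump_delta(x - w_i), optimizing
# only the centers w_i by gradient descent on a grid-discretized squared risk.
import numpy as np

rng = np.random.default_rng(0)

N, delta, lr, steps = 50, 0.1, 0.5, 2000
x = np.linspace(-1.0, 1.0, 400)          # grid over Omega = [-1, 1]
f = 0.75 * (1.0 - x**2)                  # concave target, integrates to 1 on Omega

def bump(x, w, delta):
    """Gaussian-profile bump of width delta centered at each w (one possible choice)."""
    return np.exp(-0.5 * ((x[:, None] - w[None, :]) / delta) ** 2) / (delta * np.sqrt(2.0 * np.pi))

w = rng.uniform(-1.0, 1.0, size=N)       # initial bump centers

for _ in range(steps):
    phi = bump(x, w, delta)              # (len(x), N) matrix of bump values
    fhat = phi.mean(axis=1)              # equal-weight combination (1/N) * sum_i
    resid = fhat - f                     # pointwise error on the grid
    # Gradient of R(w) = mean_x (fhat(x) - f(x))^2 with respect to each center w_i.
    dphi_dw = phi * (x[:, None] - w[None, :]) / delta**2
    grad = (2.0 / N) * (resid[:, None] * dphi_dw).mean(axis=0)
    w -= lr * grad

print("final risk:", np.mean((bump(x, w, delta).mean(axis=1) - f) ** 2))
```

In the regime described in the abstract, taking N large corresponds to replacing the empirical distribution of the centers w_i by a probability measure over Ω, whose gradient-descent dynamics converge to a Wasserstein gradient flow; this toy example only illustrates the finite-N, finite-δ optimization problem.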