FAST LEARNING RATES IN STATISTICAL INFERENCE THROUGH AGGREGATION
Publication Type:
Article
Authors:
Audibert, Jean-Yves
Affiliations:
Universite Gustave-Eiffel; Institut Polytechnique de Paris; Ecole Nationale des Ponts et Chaussees; Universite PSL; Ecole Normale Superieure (ENS); Centre National de la Recherche Scientifique (CNRS); Inria
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/08-AOS623
Publication Date:
2009
Pages:
1591-1646
Keywords:
lower bounds
prediction
complexity
Abstract:
We develop minimax optimal risk bounds for the general learning task consisting in predicting as well as the best function in a reference set G up to the smallest possible additive term, called the convergence rate. When the reference set is finite and when n denotes the size of the training data, we provide minimax convergence rates of the form C(log|G|/n)^nu with tight evaluation of the positive constant C and with exact 0 < nu <= 1, the latter value depending on the convexity of the loss function and on the level of noise in the output distribution. The risk upper bounds are based on a sequential randomized algorithm, which at each step concentrates on functions having both low risk and low variance with respect to the previous step's prediction function. Our analysis puts forward the links between the probabilistic and worst-case viewpoints, and allows us to obtain risk bounds unachievable with the standard statistical learning approach. One of the key ideas of this work is to use probabilistic inequalities with respect to appropriate (Gibbs) distributions on the prediction function space instead of using them with respect to the distribution generating the data. The risk lower bounds are based on refinements of Assouad's lemma that take the properties of the loss function into particular account. Our key example to illustrate the upper and lower bounds is the L_q-regression setting, for which an exhaustive analysis of the convergence rates is given as q ranges over [1, +infinity].
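Illustration: the abstract describes aggregation over a finite reference set via Gibbs distributions whose weights depend on accumulated losses. The sketch below is only a minimal illustration of plain exponentially weighted (Gibbs) aggregation with squared loss, not the paper's exact sequential algorithm (which also accounts for the variance with respect to the previous step's prediction); the function name, the choice of squared loss, and the temperature parameter eta are assumptions made for the example.

```python
import numpy as np

def gibbs_aggregation_weights(preds, y, eta=1.0):
    """Exponentially weighted (Gibbs) aggregation over a finite reference set.

    preds : (n, M) array -- predictions of the M reference functions on n points.
    y     : (n,)  array -- observed outputs.
    eta   : temperature of the Gibbs distribution (illustrative choice; the
            paper tunes this according to the loss and the noise level).

    Returns the sequence of weight vectors (n+1, M), starting from the uniform
    prior, each proportional to exp(-eta * cumulative loss).
    """
    n, M = preds.shape
    cum_loss = np.zeros(M)
    all_weights = [np.full(M, 1.0 / M)]          # uniform prior on the reference set
    for t in range(n):
        cum_loss += (preds[t] - y[t]) ** 2       # squared loss as an example
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # numerically stabilized Gibbs weights
        all_weights.append(w / w.sum())
    return np.array(all_weights)

# Toy usage: three constant reference predictors on noisy data.
rng = np.random.default_rng(0)
y = 0.5 + 0.1 * rng.standard_normal(200)
preds = np.column_stack([np.full(200, c) for c in (0.0, 0.5, 1.0)])
final_w = gibbs_aggregation_weights(preds, y, eta=2.0)[-1]
print(final_w)  # weights concentrate on the predictor closest to the data
```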
Source URL: