SPARSE RECOVERY IN CONVEX HULLS VIA ENTROPY PENALIZATION

Publication type:
Article
Authors:
Koltchinskii, Vladimir
Affiliations:
University System of Georgia; Georgia Institute of Technology
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/08-AOS621
Publication year:
2009
Pages:
1332-1359
Keywords:
oracle inequalities; statistical estimation; reconstruction; Lasso
Abstract:
Let $(X, Y)$ be a random couple in $S \times T$ with unknown distribution $P$, and let $(X_1, Y_1), \dots, (X_n, Y_n)$ be i.i.d. copies of $(X, Y)$. Denote by $P_n$ the empirical distribution of $(X_1, Y_1), \dots, (X_n, Y_n)$. Let $h_1, \dots, h_N : S \mapsto [-1, 1]$ be a dictionary consisting of $N$ functions. For $\lambda \in \mathbb{R}^N$, denote $f_\lambda := \sum_{j=1}^N \lambda_j h_j$. Let $\ell : T \times \mathbb{R} \mapsto \mathbb{R}$ be a given loss function, assumed convex with respect to its second variable, and let $(\ell \bullet f)(x, y) := \ell(y; f(x))$. Finally, let $\Lambda \subset \mathbb{R}^N$ be the simplex of all probability distributions on $\{1, \dots, N\}$. Consider the penalized empirical risk minimization problem
$$\hat{\lambda}^{\varepsilon} := \operatorname*{argmin}_{\lambda \in \Lambda} \Bigl[ P_n(\ell \bullet f_\lambda) + \varepsilon \sum_{j=1}^N \lambda_j \log \lambda_j \Bigr]$$
along with its distribution-dependent version
$$\lambda^{\varepsilon} := \operatorname*{argmin}_{\lambda \in \Lambda} \Bigl[ P(\ell \bullet f_\lambda) + \varepsilon \sum_{j=1}^N \lambda_j \log \lambda_j \Bigr],$$
where $\varepsilon \ge 0$ is a regularization parameter. It is proved that approximate sparsity of $\lambda^{\varepsilon}$ implies approximate sparsity of $\hat{\lambda}^{\varepsilon}$, and the impact of sparsity on bounding the excess risk of the empirical solution is explored. Similar results are also discussed in the case of entropy penalized density estimation.
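The penalized problem in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's method: it approximates $\hat{\lambda}^{\varepsilon}$ for a squared loss $\ell(y; f) = (y - f)^2$ via exponentiated-gradient (mirror) descent, whose multiplicative updates keep the iterate on the simplex $\Lambda$. The dictionary, data, step size, and iteration count are all illustrative assumptions.

```python
import numpy as np

def entropy_penalized_erm(H, y, eps=0.01, lr=0.1, n_iter=2000):
    """Sketch: approximate
        argmin_{lambda in simplex}  (1/n) * sum_i (y_i - f_lambda(X_i))^2
                                    + eps * sum_j lambda_j log lambda_j
    by exponentiated-gradient descent.  H is the n x N matrix with
    H[i, j] = h_j(X_i); columns are assumed to take values in [-1, 1].
    """
    n, N = H.shape
    lam = np.full(N, 1.0 / N)  # start at the uniform distribution
    for _ in range(n_iter):
        resid = H @ lam - y
        # gradient of empirical risk plus gradient of the entropy penalty
        grad = 2.0 * (H.T @ resid) / n + eps * (np.log(lam) + 1.0)
        lam = lam * np.exp(-lr * grad)  # multiplicative update: stays positive
        lam = np.clip(lam, 1e-12, None)  # numerical floor before taking logs
        lam /= lam.sum()                 # renormalize onto the simplex
    return lam

# Illustrative usage: a sparse combination of two dictionary elements.
rng = np.random.default_rng(0)
H = rng.uniform(-1.0, 1.0, size=(200, 10))        # hypothetical dictionary
lam_true = np.zeros(10)
lam_true[0], lam_true[1] = 0.7, 0.3
y = H @ lam_true + 0.01 * rng.normal(size=200)
lam_hat = entropy_penalized_erm(H, y)
```

For small $\varepsilon$ the entropy penalty leaves the solution close to the sparse empirical minimizer, while larger $\varepsilon$ pulls it toward the uniform distribution (the minimizer of the entropy term alone), mirroring the sparsity trade-off studied in the paper.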