WASSERSTEIN CONVERGENCE IN BAYESIAN AND FREQUENTIST DECONVOLUTION MODELS
Type:
Article
Authors:
Rousseau, Judith; Scricciolo, Catia
Affiliations:
University of Oxford; University of Verona
Journal:
ANNALS OF STATISTICS
ISSN:
0090-5364
DOI:
10.1214/24-AOS2413
Publication date:
2024
Pages:
1691-1715
Keywords:
posterior contraction rates
density estimation
maximum likelihood
sharp optimality
minimax rates
mixtures
distributions
estimator
finite
error
Abstract:
We study the multivariate deconvolution problem of recovering the distribution of a signal from independent and identically distributed observations additively contaminated with random errors (noise) from a known distribution. For errors with independent coordinates having ordinary smooth densities, we derive an inversion inequality relating the L-1-Wasserstein distance between two distributions of the signal to the L-1-distance between the corresponding mixture densities of the observations. This smoothing inequality outperforms existing inversion inequalities. As an application of the inversion inequality to the Bayesian framework, we consider L-1-Wasserstein deconvolution with Laplace noise in dimension one, using a Dirichlet process mixture of normal densities as a prior measure on the mixing distribution (or distribution of the signal). We construct an adaptive approximation of the sampling density by convolving the Laplace density with a well-chosen mixture of normal densities and show that the posterior measure concentrates around the sampling density at a nearly minimax rate, up to a log factor, in the L-1-distance. The same posterior law is also shown to automatically adapt to the unknown Sobolev regularity of the mixing density, thus leading to a new Bayesian adaptive estimation procedure for mixing distributions with regular densities under the L-1-Wasserstein metric. We also illustrate the utility of the inversion inequality in a frequentist setting by showing that an appropriate isotone approximation of the classical kernel deconvolution estimator attains the minimax rate of convergence for L-1-Wasserstein deconvolution in any dimension d >= 1, requiring only a tail condition on the latent mixing density, and we derive sharp lower bounds for these problems.
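As a schematic illustration only (not the paper's exact statement, and with placeholder constants and exponents), the deconvolution model and the general shape of an inversion inequality of the kind described in the abstract can be written as:

```latex
% Deconvolution model: each observation is a latent signal plus independent noise
% with a known error density f_\varepsilon (e.g., Laplace in the Bayesian application).
Y_i = X_i + \varepsilon_i, \qquad
X_i \overset{\text{iid}}{\sim} \mu, \qquad
\varepsilon_i \overset{\text{iid}}{\sim} f_\varepsilon \ \text{(known)},
\qquad i = 1, \dots, n,

% so the observations have the mixture (convolution) density
p_\mu = f_\varepsilon * \mu .

% Schematic form of an inversion (smoothing) inequality: the Wasserstein
% distance between two signal distributions is controlled by a power of the
% L^1-distance between the induced mixture densities,
W_1(\mu, \mu') \;\le\; C \, \| p_\mu - p_{\mu'} \|_{L^1}^{\,\gamma},
```

where the exponent \(\gamma \in (0,1]\) and the constant \(C\) are placeholders that, in results of this type, depend on the ordinary-smoothness index of \(f_\varepsilon\) and on moment or tail conditions on \(\mu, \mu'\); the paper's actual inequality should be consulted for the precise dependence.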