Hierarchical Bayes versus empirical Bayes density predictors under general divergence loss

Type:
Article
Authors:
Ghosh, M.; Kubokawa, T.
Affiliations:
State University System of Florida; University of Florida; University of Tokyo
Journal:
BIOMETRIKA
ISSN/ISBN:
0006-3444
DOI:
10.1093/biomet/asy073
Publication date:
2019
Pages:
495-500
Keywords:
goodness
Abstract:
Consider the problem of finding a predictive density for a new observation drawn independently of the observed data, where both are sampled from a multivariate normal distribution with the same unknown mean vector and the same known variance, under a general divergence loss. In this paper, we consider two kinds of prior distribution for the mean vector: one is a multivariate normal distribution with mean based on unknown regression coefficients, and the other further assumes that the regression coefficients have uniform prior distributions. The two kinds of prior distribution yield, respectively, the empirical Bayes and hierarchical Bayes predictive distributions. Both predictive distributions have the same mean, but they have different covariance matrices, with the hierarchical Bayes predictive distribution having a larger covariance matrix. We compare the two Bayesian predictive densities in terms of their frequentist risks under the general divergence loss and show that the hierarchical Bayes predictive density has a uniformly smaller risk than the empirical Bayes predictive density. As an offshoot of our result, we show that best linear unbiased predictors in mixed linear models, optimal under normality and squared error loss, maintain their optimality under the general divergence loss.
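As a numerical sketch of the abstract's covariance comparison, the snippet below works in the simplest instance of this setup: one observation per unit, y = theta + e with e ~ N(0, sigma^2 I) and prior theta ~ N(Z beta, tau^2 I). The conjugate-normal formulas here are standard textbook calculations, not taken from the paper, and the particular dimensions, variances, and design matrix are illustrative choices. The empirical Bayes predictive density plugs in the estimated beta; the hierarchical Bayes one integrates beta out under a flat prior, which adds a positive-semidefinite term along the column space of Z, so the two predictive means agree while the hierarchical Bayes covariance dominates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting (dimensions and variances are arbitrary choices):
m, p = 6, 2                      # number of units and regression covariates
sigma2, tau2 = 1.0, 2.0          # known sampling and prior variances
Z = rng.standard_normal((m, p))  # design matrix for the prior mean Z @ beta
y = rng.standard_normal(m)       # one observation per unit

B = sigma2 / (sigma2 + tau2)                    # shrinkage factor
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)    # GLS = OLS here (spherical cov)
mean_pred = B * Z @ beta_hat + (1 - B) * y      # predictive mean, same for EB and HB

I = np.eye(m)
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)           # projection onto col(Z)

# Empirical Bayes: plug beta_hat into the conditional predictive density.
cov_eb = (sigma2 + (1 - B) * sigma2) * I
# Hierarchical Bayes: flat prior on beta adds uncertainty along col(Z).
cov_hb = cov_eb + (sigma2**2 / (sigma2 + tau2)) * P

# The difference is positive semidefinite: HB is strictly more dispersed.
gap = np.linalg.eigvalsh(cov_hb - cov_eb)
print(gap.min() >= -1e-12)
```

Running this prints `True`: every eigenvalue of the covariance difference is nonnegative, matching the abstract's statement that the hierarchical Bayes predictive distribution has the larger covariance matrix while sharing the empirical Bayes predictive mean.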