Training samples in objective Bayesian model selection
Publication type:
Article
Authors:
Berger, JO; Pericchi, LR
Affiliations:
Duke University; University of Puerto Rico; University of Puerto Rico Rio Piedras
Journal:
ANNALS OF STATISTICS
ISSN/ISBN:
0090-5364
DOI:
10.1214/009053604000000229
Publication date:
2004
Pages:
841-869
Keywords:
default
hypotheses
priors
Abstract:
Central to several objective approaches to Bayesian model selection is the use of training samples (subsets of the data), so as to allow utilization of improper objective priors. The most common prescription for choosing training samples is to choose them to be as small as possible, subject to yielding proper posteriors; these are called minimal training samples. When data can vary widely in terms of either information content or impact on the improper priors, use of minimal training samples can be inadequate. Important examples include certain cases of discrete data, the presence of censored observations, and certain situations involving linear models and explanatory variables. Such situations require more sophisticated methods of choosing training samples. A variety of such methods are developed in this paper, and successfully applied in challenging situations.
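For context, a minimal sketch of the training-sample device the abstract refers to, in the standard intrinsic Bayes factor formulation from Berger and Pericchi's earlier work (the notation below is illustrative and does not describe the new methods developed in this paper): with improper objective priors $\pi_i^N$ under models $M_i$, the naive Bayes factor $B_{ji}^N(x) = m_j^N(x)/m_i^N(x)$ is defined only up to an arbitrary constant. A training sample $x(l)$ converts the improper priors into proper posteriors, and the remaining data then give the partial Bayes factor
\[
  B_{ji}\bigl(x(l)\bigr) \;=\; B_{ji}^N(x)\, B_{ij}^N\bigl(x(l)\bigr),
\]
and the arithmetic intrinsic Bayes factor averages the correction factor over all $L$ minimal training samples:
\[
  B_{ji}^{AI} \;=\; B_{ji}^N(x)\, \frac{1}{L}\sum_{l=1}^{L} B_{ij}^N\bigl(x(l)\bigr).
\]
The paper's contribution concerns how the training samples $x(l)$ themselves should be chosen when minimal ones are inadequate.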