A discrete bouncy particle sampler

Publication type:
Article
Authors:
Sherlock, C.; Thiery, A. H.
Affiliations:
Lancaster University; National University of Singapore
Journal:
BIOMETRIKA
ISSN:
0006-3444
DOI:
10.1093/biomet/asab013
Publication date:
2022
Pages:
335-349
Keywords:
Metropolis-Hastings; variance reduction
Abstract:
Most Markov chain Monte Carlo methods operate in discrete time and are reversible with respect to the target probability. Nevertheless, it is now understood that the use of nonreversible Markov chains can be beneficial in many contexts. In particular, the recently proposed bouncy particle sampler leverages a continuous-time and nonreversible Markov process, and empirically shows state-of-the-art performance when used to explore certain probability densities; however, its implementation typically requires the computation of local upper bounds on the gradient of the log target density. We present the discrete bouncy particle sampler, a general algorithm based on a guided random walk, a partial refreshment of direction and a delayed-rejection step. We show that the bouncy particle sampler can be understood as a scaling limit of a special case of our algorithm. In contrast to the bouncy particle sampler, implementing the discrete bouncy particle sampler only requires pointwise evaluation of the target density and its gradient. We propose extensions of the basic algorithm for situations when the exact gradient of the target density is not available. In a Gaussian setting, we establish a scaling limit for the radial process as the dimension increases to infinity. We leverage this result to obtain the theoretical efficiency of the discrete bouncy particle sampler as a function of the partial-refreshment parameter, which leads to a simple and robust tuning criterion. A further analysis in a more general setting suggests that this tuning criterion applies more generally. Theoretical and empirical efficiency curves are then compared for different targets and algorithm variations.
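Two of the ingredients named in the abstract, the guided random walk (propose a move along a persistent direction, reversing the direction on rejection) and the partial refreshment of direction, can be illustrated in a short sketch. This is not the paper's discrete bouncy particle sampler: it omits the gradient-based bounce and the delayed-rejection step, and the function name, step size `delta`, and refreshment parameter `beta` are illustrative choices, shown here on a standard Gaussian target.

```python
import numpy as np

def guided_walk_sampler(logpi, x0, n_steps, delta=0.5, beta=0.3, seed=None):
    """Illustrative guided random walk with partial direction refreshment.

    A simplified sketch of two components from the abstract; it does NOT
    include the gradient bounce or delayed-rejection step of the discrete
    bouncy particle sampler.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    # Direction: a unit vector, partially refreshed at each iteration.
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    lp = logpi(x)
    samples = np.empty((n_steps, d))
    for i in range(n_steps):
        # Partial refreshment: autoregressive perturbation, re-normalized
        # to the unit sphere; beta=1 recovers a full refresh.
        v = np.sqrt(1.0 - beta**2) * v + beta * rng.standard_normal(d)
        v /= np.linalg.norm(v)
        # Guided proposal: a deterministic push along the current direction.
        x_prop = x + delta * v
        lp_prop = logpi(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        else:
            v = -v  # reverse the direction on rejection (guided-walk bounce-back)
        samples[i] = x
    return samples

# Usage: sample a 10-dimensional standard Gaussian.
logpi = lambda x: -0.5 * np.sum(x**2)
samples = guided_walk_sampler(logpi, np.zeros(10), n_steps=5000, seed=1)
print(samples.shape)  # (5000, 10)
```

Persisting the direction `v` across iterations is what makes the chain non-reversible; the partial refreshment trades off persistence (long, ballistic moves) against the ability to change direction, which is the parameter the paper's tuning criterion addresses.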