Dynamic importance sampling for uniformly recurrent Markov chains
Publication type:
Article
Authors:
Dupuis, P; Wang, H
Affiliation:
Brown University
Journal:
ANNALS OF APPLIED PROBABILITY
ISSN/ISBN:
1050-5164
DOI:
10.1214/105051604000001016
Publication date:
2005
Pages:
1-38
Keywords:
large deviations theory
Monte Carlo
additive processes
rare events
simulation
Abstract:
Importance sampling is a variance reduction technique for efficient estimation of rare-event probabilities by Monte Carlo. In standard importance sampling schemes, the system is simulated using an a priori fixed change of measure suggested by a large deviation lower bound analysis. Recent work, however, has suggested that such schemes do not work well in many situations. In this paper we consider dynamic importance sampling in the setting of uniformly recurrent Markov chains. By dynamic we mean that in the course of a single simulation, the change of measure can depend on the outcome of the simulation up to that time. Based on a control-theoretic approach to large deviations, the existence of asymptotically optimal dynamic schemes is demonstrated in great generality. The implementation of the dynamic schemes is carried out with the help of a limiting Bellman equation. Numerical examples are presented to contrast the dynamic and standard schemes.
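The contrast between the fixed and dynamic changes of measure can be made concrete with a minimal sketch. The Python snippet below estimates P(S_n >= n*a) for a sum of i.i.d. +/-1 steps, a toy stand-in for the uniformly recurrent chains treated in the paper; the horizon n, level a, trial count, and the state-dependent tilt rule are illustrative assumptions in the spirit of the paper's Bellman-equation-based schemes, not the authors' construction.

```python
# A minimal sketch, not the authors' implementation: standard (fixed-tilt)
# versus state-dependent ("dynamic") importance sampling for the rare event
# {S_n >= n*a}, where S_n is a sum of i.i.d. +/-1 fair-coin steps.
import math
import random

n, a, trials = 60, 0.6, 20_000
random.seed(0)

def tilted_step(theta):
    """Draw x in {+1, -1} from the exponentially tilted law
    q_theta(x) = exp(theta * x) / (2 * cosh(theta)) and return
    (x, likelihood ratio p(x) / q_theta(x)), p being the fair coin."""
    p_plus = math.exp(theta) / (2.0 * math.cosh(theta))
    x = 1 if random.random() < p_plus else -1
    return x, math.cosh(theta) * math.exp(-theta * x)

def estimate(dynamic):
    total = 0.0
    for _ in range(trials):
        s, lr = 0, 1.0
        for i in range(n):
            if dynamic:
                # Retarget the tilt at every step: aim the conditional mean
                # drift at the average slope still needed to reach n*a from
                # state (i, s); once the event is no longer rare from the
                # current state, stop tilting (theta = 0).
                need = (n * a - s) / (n - i)
                theta = math.atanh(min(max(need, 0.0), 0.999))
            else:
                # Fixed tilt suggested by the large deviation lower bound.
                theta = math.atanh(a)
            x, step_lr = tilted_step(theta)
            s += x
            lr *= step_lr
        if s >= n * a:
            total += lr   # unbiased: indicator times likelihood ratio
    return total / trials

# Exact value for comparison: S_n >= n*a iff the number of +1 steps
# is at least n*(1 + a)/2.
k_min = math.ceil(n * (1 + a) / 2)
exact = sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n
print("exact      :", exact)
print("standard IS:", estimate(dynamic=False))
print("dynamic IS :", estimate(dynamic=True))
```

In this one-dimensional example the fixed tilt is already efficient, so the two estimates are comparable; the paper's point is that in more general situations the standard scheme can perform poorly, while dynamic schemes derived from the limiting Bellman equation remain asymptotically optimal.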