Uniform Markov renewal theory and ruin probabilities in Markov random walks

Publication type:
Article
Author(s):
Fuh, CD
Affiliation:
Academia Sinica - Taiwan
Journal:
ANNALS OF APPLIED PROBABILITY
ISSN/ISBN:
1050-5164
DOI:
10.1214/105051604000000260
Publication date:
2004
Pages:
1202-1241
Keywords:
corrected diffusion approximations; first passage times; asymptotic expansions; limit theorems; convergence
Abstract:
Let $\{X_n, n \ge 0\}$ be a Markov chain on a general state space $\mathcal{X}$ with transition probability $P$ and stationary probability $\pi$. Suppose an additive component $S_n$ takes values in the real line $\mathbb{R}$ and is adjoined to the chain such that $\{(X_n, S_n), n \ge 0\}$ is a Markov random walk. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to boundary crossing problems for $\{(X_n, S_n), n \ge 0\}$. To be more precise, for given $b \ge 0$, define the stopping time $\tau = \tau(b) = \inf\{n : S_n > b\}$. When the drift $\mu$ of the random walk $S_n$ is 0, we derive a one-term Edgeworth-type asymptotic expansion for the first passage probabilities $P_\pi\{\tau < m\}$ and $P_\pi\{\tau < m, S_m < c\}$, where $m \le \infty$, $c \le b$, and $P_\pi$ denotes the probability under the initial distribution $\pi$. When $\mu \ne 0$, Brownian approximations for the first passage probabilities with correction terms are derived. Applications to sequential estimation and truncated tests in random coefficient models, and to first passage times in products of random matrices, are also given.
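
As a concrete illustration of the objects defined in the abstract (not part of the paper itself), the following minimal Monte Carlo sketch simulates a Markov random walk driven by a two-state chain and estimates the first passage probability $P_\pi\{\tau \le m\}$ for $\tau = \inf\{n : S_n > b\}$. The two-state transition matrix, the Gaussian state-dependent increments, and all parameter values are assumptions chosen only for illustration; the paper's results concern analytic expansions and approximations of such probabilities, not simulation.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's method): a two-state
# Markov chain X_n with an adjoined additive component S_n, where each
# increment is drawn from a state-dependent normal distribution, so that
# {(X_n, S_n)} is a Markov random walk. We estimate P_pi{tau <= m} for
# tau = inf{n : S_n > b} by straightforward Monte Carlo.

rng = np.random.default_rng(0)

# Transition matrix of the driving chain (assumed for illustration).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
# Its stationary distribution pi (solves pi P = pi).
pi = np.array([4.0 / 7.0, 3.0 / 7.0])

# State-dependent increment means chosen so the stationary drift mu is 0.
means = np.array([0.75, -1.0])            # pi @ means == 0
sigma = 1.0                               # common increment std (assumed)

b, m, n_paths = 5.0, 200, 20000           # boundary, horizon, replications

hits = 0
for _ in range(n_paths):
    x = rng.choice(2, p=pi)               # X_0 drawn from pi
    s = 0.0
    for n in range(1, m + 1):
        x = rng.choice(2, p=P[x])         # X_n | X_{n-1}
        s += rng.normal(means[x], sigma)  # S_n = S_{n-1} + xi_n
        if s > b:                         # tau = first n with S_n > b
            hits += 1
            break

print("Monte Carlo estimate of P_pi{tau <= m}:", hits / n_paths)
```

In the zero-drift regime simulated here, the paper's one-term Edgeworth-type expansion describes how such first passage probabilities behave as $b$ grows; when the drift is nonzero, the corrected Brownian approximations apply instead.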