Uniform convergence of exact large deviations for renewal reward processes

Publication type:
Article
Author:
Chi, Zhiyi
Affiliation:
University of Connecticut
Journal:
ANNALS OF APPLIED PROBABILITY
ISSN/ISBN:
1050-5164
DOI:
10.1214/105051607000000023
Publication date:
2007
Pages:
1019-1048
Keywords:
boundary crossing probabilities; Markov additive processes; limit theorems; asymptotic expansions
Abstract:
Let $(X_n, Y_n)$ be i.i.d. random vectors. Let $W(x)$ be the partial sum of the $Y_n$ just before the partial sum of the $X_n$ exceeds $x > 0$. Motivated by stochastic models for neural activity, uniform convergence of the form $\sup_{c \in I} \left| a(c, x) \Pr\{W(x) \ge cx\} - 1 \right| = o(1)$, $x \to \infty$, is established for probabilities of large deviations, where $a(c, x)$ is a deterministic function and $I$ an open interval. To obtain this uniform exact large deviations principle (LDP), we first establish the exponentially fast uniform convergence of a family of renewal measures and then apply it to appropriately tilted distributions of $X_n$ and to the moment generating function of $W(x)$. The uniform exact LDP is obtained for cases where $X_n$ has a subcomponent with a smooth density and $Y_n$ is not a linear transform of $X_n$. An extension is also made to the partial sum at the first exceedance time.
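A minimal formal restatement of the quantities in the abstract may help; the symbols $S_n$, $T_n$, and $\nu(x)$ below are notation introduced here for illustration and may differ from the paper's own:

\[
S_n = \sum_{k=1}^{n} X_k, \qquad
T_n = \sum_{k=1}^{n} Y_k, \qquad
\nu(x) = \inf\{n \ge 1 : S_n > x\}, \qquad
W(x) = T_{\nu(x)-1},
\]

so that $W(x)$ is the reward accumulated strictly before the $X$-partial sum first exceeds $x$; under this notation, the extension mentioned at the end of the abstract concerns $T_{\nu(x)}$, the partial sum at the first exceedance time itself.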