How linear reinforcement affects Donsker's theorem for empirical processes

Type:
Article
Author(s):
Bertoin, Jean
Affiliation:
University of Zurich
Journal:
PROBABILITY THEORY AND RELATED FIELDS
ISSN:
0178-8051
DOI:
10.1007/s00440-020-01001-9
Publication year:
2020
Pages:
1173-1192
Keywords:
Asymptotics limit
Abstract:
A reinforcement algorithm introduced by Simon (Biometrika 42(3/4):425-440, 1955) produces a sequence of uniform random variables with long-range memory as follows. At each step, with a fixed probability p ∈ (0, 1), Û_{n+1} is sampled uniformly from Û_1, ..., Û_n, and with complementary probability 1 - p, Û_{n+1} is a new independent uniform variable. The Glivenko-Cantelli theorem remains valid for the reinforced empirical measure, but not the Donsker theorem. Specifically, we show that the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor when p < 1/2, and that a further rescaling is needed when p > 1/2; the limit is then a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step-reinforced random walks.
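The reinforcement step described in the abstract can be simulated directly. The sketch below is illustrative only (function names and parameters are not from the paper): with probability p the next term copies a uniformly chosen earlier term, and otherwise it is a fresh Uniform(0,1) draw; a helper evaluates the reinforced empirical distribution function.

```python
import random


def simon_sequence(n, p, seed=0):
    """Generate n terms of Simon's (1955) reinforcement scheme.

    With probability p the next value repeats a uniformly chosen
    earlier value; with probability 1 - p it is a new independent
    Uniform(0,1) draw. Names here are illustrative assumptions.
    """
    rng = random.Random(seed)
    seq = [rng.random()]  # the first term is always a fresh uniform
    for _ in range(n - 1):
        if rng.random() < p:
            seq.append(rng.choice(seq))  # reinforce: copy a past value
        else:
            seq.append(rng.random())     # innovate: new uniform draw
    return seq


def empirical_cdf(seq, x):
    """Empirical distribution function of the sequence at point x."""
    return sum(v <= x for v in seq) / len(seq)
```

Since the Glivenko-Cantelli theorem still holds for the reinforced empirical measure, the empirical CDF of a long simulated sequence should stay close to the uniform CDF, even though reinforcement makes the terms strongly dependent (many values appear more than once).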
Source URL: