Bayesian representation of stochastic processes under learning: De Finetti revisited
Publication type:
Article
Authors:
Jackson, MO; Kalai, E; Smorodinsky, R
Affiliations:
California Institute of Technology; Northwestern University; Technion Israel Institute of Technology
Journal:
ECONOMETRICA
ISSN/ISBN:
0012-9682
DOI:
10.1111/1468-0262.00055
Publication date:
1999
Pages:
875-893
Keywords:
Nash equilibrium
information
games
Abstract:
A probability distribution governing the evolution of a stochastic process has infinitely many Bayesian representations of the form $\mu = \int_{\Theta} \mu_{\theta}\, d\lambda(\theta)$. Among these, a natural representation is one whose components (the $\mu_{\theta}$'s) are learnable (one can approximate $\mu_{\theta}$ by conditioning $\mu$ on observation of the process) and sufficient for prediction ($\mu_{\theta}$'s predictions are not aided by further conditioning on observation of the process). We show the existence and uniqueness of such a representation under a suitable asymptotic mixing condition on the process. This representation can be obtained by conditioning on the tail field of the process, and any learnable representation that is sufficient for prediction is asymptotically like the tail-field representation. This result is related to the celebrated de Finetti theorem, but with exchangeability weakened to an asymptotic mixing condition, and with de Finetti's conclusion of a decomposition into i.i.d. component distributions weakened to a decomposition into components that are learnable and sufficient for prediction.
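As a standard point of comparison (the classical de Finetti special case, not the paper's weaker mixing setting): for an exchangeable sequence of 0-1 random variables, the representation specializes to a mixture of i.i.d. Bernoulli laws,

\[
\mu(x_1, \dots, x_n) \;=\; \int_0^1 \theta^{\,\sum_{i=1}^n x_i} \, (1-\theta)^{\,n - \sum_{i=1}^n x_i} \, d\lambda(\theta),
\]

where each component $\mu_{\theta}$ is learnable (the empirical frequency of 1's converges to $\theta$) and sufficient for prediction (given $\theta$, past observations do not change the forecast that the next observation equals 1 with probability $\theta$).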