On the Iterated Estimation of Dynamic Discrete Choice Games
Publication type:
Article
Authors:
Bugni, Federico A.; Bunting, Jackson
Affiliation:
Duke University
Journal:
REVIEW OF ECONOMIC STUDIES
ISSN/ISBN:
0034-6527
DOI:
10.1093/restud/rdaa032
Publication date:
2021
Pages:
1031-1073
Keywords:
sequential estimation
likelihood-estimation
structural models
gmm
Abstract:
We study the first-order asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of PIs employed in the estimation. This class nests several estimators proposed in the literature. By considering a pseudo likelihood criterion function, our estimator becomes the K-pseudo maximum likelihood (PML) estimator in Aguirregabiria and Mira (2002, 2007). By considering a minimum distance criterion function, it defines a new K-minimum distance (MD) estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007). First, we establish that the K-PML estimator is consistent and asymptotically normal for any K ∈ ℕ. This complements findings in Aguirregabiria and Mira (2007), who focus on K = 1 and K large enough to induce convergence of the estimator. Furthermore, we show under certain conditions that the asymptotic variance of the K-PML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for any K ∈ ℕ. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-PML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This invariance result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-PML estimators. Our main result implies two new corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is efficient in the class of K-MD estimators for all K ∈ ℕ. In other words, additional PIs do not provide first-order efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is at least as efficient as any K-PML estimator for all K ∈ ℕ. Finally, the Appendix provides appropriate conditions under which the optimal 1-MD estimator is efficient among regular estimators.
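To fix ideas, the K-stage PI recursions described in the abstract can be sketched as follows. This is a minimal illustration in the spirit of the NPL algorithm of Aguirregabiria and Mira and the minimum-distance estimator of Pesendorfer and Schmidt-Dengler; the symbols P̂_k (stage-k estimate of the conditional choice probabilities), Ψ (the best-response/policy-iteration mapping), and Ŵ_k (a weight matrix) are illustrative choices of notation and need not match the paper's.

```latex
% Sketch only; notation is illustrative, not the paper's exact notation.
% \hat P_0 is a preliminary nonparametric estimate of the conditional choice
% probabilities, \Psi(\theta, P) is the best-response (policy-iteration) mapping,
% and \hat W_k is a (possibly data-dependent) weight matrix.

% K-PML: at each stage k = 1, \dots, K,
\hat\theta_k^{\mathrm{PML}} \in \arg\max_{\theta \in \Theta}
  \frac{1}{n}\sum_{i=1}^{n} \ln \Psi(\theta, \hat P_{k-1})(a_i \mid x_i),
\qquad
\hat P_k = \Psi(\hat\theta_k^{\mathrm{PML}}, \hat P_{k-1}).

% K-MD: at each stage k = 1, \dots, K,
\hat\theta_k^{\mathrm{MD}} \in \arg\min_{\theta \in \Theta}
  \bigl(\hat P_{k-1} - \Psi(\theta, \hat P_{k-1})\bigr)'\,
  \hat W_k\,
  \bigl(\hat P_{k-1} - \Psi(\theta, \hat P_{k-1})\bigr),
\qquad
\hat P_k = \Psi(\hat\theta_k^{\mathrm{MD}}, \hat P_{k-1}).
```

Read this way, K = 1 corresponds to the familiar two-step estimators, and the paper's results concern how the asymptotic distribution of the stage-K estimator varies with K: arbitrarily for the K-PML sequence under certain conditions, and not at all for the K-MD sequence under the optimal choice of weight matrices.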