ALMOND: Adaptive Latent Modeling and Optimization via Neural Networks and Langevin Diffusion
Publication Type:
Article
Authors:
Qiu, Yixuan; Wang, Xiao
Affiliations:
Carnegie Mellon University; Purdue University System; Purdue University
Journal:
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
ISSN:
0162-1459
DOI:
10.1080/01621459.2019.1691563
Publication Year:
2021
Pages:
1224-1236
Keywords:
maximum likelihood
convergence
approximate
inference
algorithm
ECM
EM
Abstract:
Latent variable models cover a broad range of statistical and machine learning models, such as Bayesian models, linear mixed models, and Gaussian mixture models. Existing methods often suffer from two major challenges in practice: (a) a proper latent variable distribution is difficult to specify; (b) exact likelihood inference is formidable due to intractable computation. We propose a novel framework for the inference of latent variable models that overcomes these two limitations. This new framework allows for a fully data-driven latent variable distribution via deep neural networks, and the proposed stochastic gradient method, combined with the Langevin algorithm, is efficient and suitable for complex models and big data. We provide theoretical results for the Langevin algorithm, and establish the convergence analysis of the optimization method. This framework has demonstrated superior practical performance through simulation studies and a real data analysis. Supplementary materials for this article are available online.
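The Langevin algorithm referenced in the abstract draws approximate samples from an unnormalized density using only its score (gradient of the log-density). As a hedged illustration only (not the paper's ALMOND method, which couples this sampler with neural-network latent distributions and stochastic gradient optimization), the sketch below runs the unadjusted Langevin algorithm on a standard normal target, where the score is simply -x:

```python
import numpy as np

def langevin_samples(grad_log_p, x0, step=0.1, n_iter=50_000, burn_in=1_000, seed=0):
    """Unadjusted Langevin algorithm (ULA):
    x_{k+1} = x_k + (step/2) * grad log p(x_k) + sqrt(step) * N(0, 1).
    Returns post-burn-in samples; the chain targets p approximately,
    with a discretization bias that shrinks as `step` decreases."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_iter - burn_in)
    for k in range(n_iter):
        x = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.standard_normal()
        if k >= burn_in:
            samples[k - burn_in] = x
    return samples

# Toy target: standard normal, so grad log p(x) = -x (an assumption for
# illustration; any differentiable log-density would work the same way).
draws = langevin_samples(lambda x: -x, x0=5.0)
print(draws.mean(), draws.std())  # both should be close to 0 and 1
```

With a small step size the stationary distribution of ULA is close to the target; a Metropolis correction (MALA) would remove the residual discretization bias at extra cost.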