BAYESIAN INVERSE REINFORCEMENT LEARNING FOR COLLECTIVE ANIMAL MOVEMENT

Publication Type:
Article
Authors:
Schafer, Toryn L. J.; Wikle, Christopher K.; Hooten, Mevin B.
Affiliations:
University of Missouri System; University of Missouri Columbia; United States Department of the Interior; United States Geological Survey; Colorado State University System; Colorado State University Fort Collins
Journal:
ANNALS OF APPLIED STATISTICS
ISSN/ISBN:
1932-6157
DOI:
10.1214/21-AOAS1529
Publication Date:
2022
Pages:
999-1013
Keywords:
models
Abstract:
Agent-based methods allow for defining simple rules that generate complex group behaviors. The governing rules of such models are typically set a priori, and parameters are tuned from observed behavior trajectories. Instead of making simplifying assumptions across all anticipated scenarios, inverse reinforcement learning provides inference on the short-term (local) rules governing long-term behavior policies by using properties of a Markov decision process. We use the computationally efficient linearly-solvable Markov decision process to learn the local rules governing collective movement for a simulation of the self-propelled-particle (SPP) model and a data application for a captive guppy population. The estimation of the behavioral decision costs is done in a Bayesian framework with basis function smoothing. We recover the true costs in the SPP simulation and find that the guppies value collective movement more than targeted movement toward shelter.
Source URL:
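The linearly-solvable MDP named in the abstract admits a closed-form solution via the exponentiated value (desirability) function. Below is a minimal sketch of a first-exit LMDP solved by fixed-point iteration on a toy 5-state chain; the passive dynamics, state costs, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy passive dynamics: random walk on a 5-state line; state 4 is an absorbing goal.
n = 5
P = np.zeros((n, n))
for s in range(n - 1):
    left, right = max(s - 1, 0), min(s + 1, n - 1)
    P[s, left] += 0.5
    P[s, right] += 0.5
P[n - 1, n - 1] = 1.0

q = np.full(n, 1.0)  # assumed per-state costs; the goal state is cost-free
q[n - 1] = 0.0

# First-exit LMDP fixed point: z(s) = exp(-q(s)) * sum_s' P[s, s'] z(s'),
# with the terminal desirability pinned at exp(-q_terminal).
z = np.ones(n)
for _ in range(1000):
    z = np.exp(-q) * (P @ z)
    z[n - 1] = np.exp(-q[n - 1])

v = -np.log(z)  # value function recovered from desirability

# Optimal controlled transitions reweight the passive dynamics by desirability:
# u*(s'|s) proportional to P[s, s'] * z(s').
U = P * z[None, :]
U /= U.sum(axis=1, keepdims=True)
```

The key computational point, and the reason the abstract calls the LMDP efficient, is that the Bellman equation becomes linear in `z`, so no nested policy optimization is needed; the controlled policy `U` falls out of the solved desirability in one reweighting step.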