A virtual rodent predicts the structure of neural activity across behaviours

Publication type:
Article
Authors:
Aldarondo, Diego; Merel, Josh; Marshall, Jesse D.; Hasenclever, Leonard; Klibaite, Ugne; Gellis, Amanda; Tassa, Yuval; Wayne, Greg; Botvinick, Matthew; Olveczky, Bence P.
Affiliations:
Harvard University; Alphabet Inc.; DeepMind; Google Incorporated; University of London; University College London
Journal:
Nature
ISSN:
0028-0836
DOI:
10.1038/s41586-024-07633-4
Publication date:
2024-08-15
Pages:
594+
Keywords:
arm movements; motor cortex; dynamics; body; computation; network models; system; driven; force
Abstract:
Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviours. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals. Here, to facilitate this, we built a 'virtual rodent', in which an artificial neural network actuates a biomechanically realistic model of the rat(1) in a physics simulator(2). We used deep reinforcement learning(3-5) to train the virtual agent to imitate the behaviour of freely moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behaviour. We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent's network activity than by any features of the real rat's movements, consistent with both regions implementing inverse dynamics(6). Furthermore, the network's latent variability predicted the structure of neural variability across behaviours and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control(7). These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behaviour and relate it to theoretical principles of motor control.
Source URL: