Smart active particles learn and transcend bacterial foraging strategies

Publication type:
Article
Authors:
Nasiri, Mahdi; Loran, Edwin; Liebchen, Benno
Affiliation:
Technical University of Darmstadt
Journal:
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
ISSN:
0027-8424
DOI:
10.1073/pnas.2317618121
Publication date:
2024-04-09
Keywords:
Lévy walks; search range; reinforcement; chemotaxis
Abstract:
Throughout evolution, bacteria and other microorganisms have learned efficient foraging strategies that exploit characteristic properties of their unknown environment. While much research has been devoted to statistical models describing the dynamics of foraging bacteria and other (micro-)organisms, little is known about how good the learned strategies actually are. This knowledge gap is largely caused by the absence of methods for systematically developing alternative foraging strategies to compare with. In the present work, we use deep reinforcement learning to show that a smart run-and-tumble agent, which strives to find nutrients for its survival, learns motion patterns that are remarkably similar to the trajectories of chemotactic bacteria. Strikingly, despite this similarity, we also find notable differences between the learned tumble-rate distribution and the one commonly assumed for the run-and-tumble model. We find that these differences equip the agent with significant advantages regarding its foraging and survival capabilities. Our results uncover a generic route to using deep reinforcement learning to discover search and collection strategies that exploit characteristic but initially unknown features of the environment. These results can be used, e.g., to program future microswimmers, nanorobots, and smart active particles for tasks such as searching for cancer cells, microwaste collection, or environmental remediation.
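To illustrate the kind of approach the abstract describes, the sketch below shows a reinforcement-learning agent discovering a run-and-tumble foraging policy. It is a minimal, hypothetical example, not the authors' code: it uses tabular Q-learning instead of deep reinforcement learning, a single static Gaussian nutrient patch as the environment, a two-valued state (perceived concentration decreasing or increasing), and the concentration change as reward. All of these choices are assumptions made for the sake of a runnable toy example.

```python
# Hypothetical sketch (not the authors' method): tabular Q-learning of a
# run-and-tumble foraging policy in a static Gaussian nutrient field.
# State: sign of the recent change in perceived concentration.
# Actions: 0 = run (keep direction), 1 = tumble (pick a random direction).
import numpy as np

rng = np.random.default_rng(0)

def concentration(pos, source=np.array([0.0, 0.0]), width=5.0):
    """Nutrient concentration: a single Gaussian patch (an assumption)."""
    return np.exp(-np.sum((pos - source) ** 2) / (2 * width ** 2))

n_states, n_actions = 2, 2           # concentration decreasing/increasing; run/tumble
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate
speed, dt = 0.5, 1.0

for episode in range(200):
    pos = rng.uniform(-20, 20, size=2)           # random start away from the patch
    angle = rng.uniform(0, 2 * np.pi)
    prev_c = concentration(pos)
    state = 0
    for step in range(500):
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        if action == 1:                          # tumble: reorient randomly
            angle = rng.uniform(0, 2 * np.pi)
        pos = pos + speed * dt * np.array([np.cos(angle), np.sin(angle)])
        c = concentration(pos)
        reward = c - prev_c                      # reward climbing the nutrient gradient
        next_state = int(c > prev_c)
        # standard Q-learning update
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                     - Q[state, action])
        state, prev_c = next_state, c

print("Learned Q-values (rows: c decreasing/increasing, cols: run/tumble):")
print(Q)
```

In this toy setting the agent typically learns to keep running while the concentration increases and to tumble when it decreases, qualitatively mirroring bacterial chemotaxis; the paper's deep-reinforcement-learning agent goes further and learns tumble statistics that differ from the standard run-and-tumble model.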