AI and biosecurity: The need for governance

Document Type:
Editorial Material
Authors:
Bloomfield, Doni; Pannu, Jaspreet; Zhu, Alex W.; Ng, Madelena Y.; Lewis, Ashley; Bendavid, Eran; Asch, Steven M.; Hernandez-Boussard, Tina; Cicero, Anita; Inglesby, Tom
Affiliations:
Johns Hopkins University; Johns Hopkins Bloomberg School of Public Health; Fordham University; Stanford University; Stanford University; Stanford University; Stanford University
Journal:
SCIENCE
ISSN:
0036-8075
DOI:
10.1126/science.adq1977
Publication Date:
2024-08-23
Pages:
831-833
Keywords:
Abstract:
Governments should evaluate advanced models and, if needed, impose safety measures. Great benefits to humanity will likely ensue from advances in artificial intelligence (AI) models trained on or capable of meaningfully manipulating substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields (1-3). But as with any powerful new technology, such biological models will also pose considerable risks. Because of their general-purpose nature, the same biological model able to design a benign viral vector to deliver gene therapy could be used to design a more pathogenic virus capable of evading vaccine-induced immunity (4). Voluntary commitments among developers to evaluate biological models' potentially dangerous capabilities are meaningful and important but cannot stand alone. We propose that national governments, including the United States, pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics.