Mitigating bias in AI at the point of care
Publication Type:
Editorial Material
Authors:
DeCamp, Matthew; Lindvall, Charlotta
Affiliations:
University of Colorado System; University of Colorado Anschutz Medical Campus; Harvard University; Harvard University Medical Affiliates; Dana-Farber Cancer Institute; Harvard Medical School
Journal:
SCIENCE
ISSN:
0036-8075
DOI:
10.1126/science.adh2713
Publication Date:
2023-07-14
Pages:
150-152
Keywords:
Abstract:
Artificial intelligence (AI) shows promise for improving basic and translational science, medicine, and public health, but its success is not guaranteed. Numerous examples have arisen of racial, ethnic, gender, disability, and other biases in AI applications to health care. In ethics, bias generally refers to any systematic, unfair favoring of people in terms of how they are treated or the outcomes they experience. Consensus has emerged among scientists, ethicists, and policy-makers that minimizing bias is a shared responsibility among all involved in AI development. For example, ensuring equity by eliminating bias in AI is a core principle of the World Health Organization for governing AI (1). But ensuring equity will require more than unbiased data and algorithms. It will also require reducing biases in how clinicians and patients use AI-based algorithms, a potentially more challenging task than reducing biases in the algorithms themselves.