
Abstract
Artificial Intelligence (AI), when scaled responsibly, holds significant potential for healthcare systems. Yet significant barriers to its adoption remain, including fragmented data foundations, regulatory uncertainty, and gaps in governance and workforce capacity. Unleashing AI’s potential to benefit everyone’s health requires balancing market forces with health culture.
OECD Member countries are undertaking initiatives to address these gaps, such as establishing strategies and action plans at the intersection of AI and health. To support these actions, a coherent policy checklist was developed to guide decision making and prioritisation and to avoid blind spots.
The checklist is organised into four pillars: establishing enablers (for data foundations, assuring and scaling AI, and capacity building); implementing guardrails (to oversee and monitor progress toward common objectives); engaging meaningfully with the public, providers and industry; and deploying trustworthy AI. Across the four pillars, nine main policy categories and 42 questions have emerged as critical for responsibly scaling the benefits of AI in health.
Action will be accelerated by learning from each other and solving challenges together. A shared recognition has emerged: coherent, cross-border compatible policies are essential to balance innovation with safety, and economic opportunity with building public trust.
Executive summary
Artificial Intelligence (AI) holds significant potential for healthcare systems. That potential is not being fully realised due to fragmented data foundations, non-aligned policies and practices, and structural and governance barriers to scalability. Although AI is universally used in administration across OECD Member countries (100%), national-level scale‑up remains limited (e.g. only 10% for medical imaging applications).
Today, there are well-documented risks associated with the use of AI in healthcare, such as skewed data, privacy and security risks, insufficient transparency or oversight, and the potential for job displacement and de‑personalisation. While caution is necessary, there is also risk in inaction.
The opportunity from AI in health will be unleashed when we can responsibly scale. This requires a balance among market forces (which move fast), health culture (which prioritises doing no harm), and reach (bringing benefits to every person at scale).
While AI is already being used in health across OECD Member countries, responsible and scalable adoption remains constrained by structural, regulatory, and governance gaps. OECD Member countries are undertaking initiatives to address these gaps:
Establishing a strategy or action plan at the intersection of AI and health (18%),
Establishing an oversight body for the use of AI in health (18%),
Establishing a national approach to regulatory sandboxes with a focus on AI in health (18%),
Streamlining the national approach to health technology assessments to include AI (24%),
Updating national procurement guidelines to account for AI in health (11%),
Establishing a national approach to improve the use of AI in the health workforce (29%), and
Developing national legislation for AI in health (3%).
To support these actions toward responsibly scaling AI in health, a coherent policy checklist was developed to guide decision making and prioritisation. The checklist is built on OECD AI principles and frameworks and developed in partnership with the Global Digital Health Partnership (GDHP) and the Coalition for Health AI (CHAI), as well as the OECD AI in Health Expert Group.
This AI in Health Policy Checklist identifies policymaker, technologist, and health workforce actions to responsibly scale AI in health. Critically, the checklist can be used to identify blind spots in those actions. The checklist is not prescriptive; rather, it prompts decision makers to consider a full range of actions across relevant policy categories and areas.
The four pillars of the checklist focus on establishing enablers (for data foundations, assuring and scaling AI, and capacity building); implementing guardrails (to oversee and monitor progress towards agreed objectives); engaging meaningfully with the public, providers, and industry; and deploying trustworthy AI. Across the four pillars, nine main policy categories and 42 questions have emerged as critical for responsibly scaling the benefits of AI in health:
Better use of data – Without data, AI solutions cannot function effectively. Considerations for data in healthcare include that they are findable, accessible, interoperable, and reusable (FAIR), along with being representative of the population for both primary and analytic uses. Emerging leading practice includes the establishment of country-led health data authorities (e.g. across Europe) or equivalent governance structures to ensure compliance with data protection laws while also facilitating AI adoption using secure and quality datasets.
Guidance to enable scale of AI – To support industry (developers) and implementers (governments, health workforce, public) to move AI from pilot to widespread deployment, tailored policy guidance is needed. An emerging leading practice includes the development of model cards (e.g. from the Coalition for Health AI), which certify compliance, transparency, and accountability of AI solutions when applied to real-world settings.
People capacity – A skilled and knowledgeable health workforce is essential for the uptake and sustained use of AI solutions in healthcare. Emerging leading practices include proactive planning and workforce upskilling across both frontline and back-end health roles (e.g. the United Kingdom Digital and Data Professional Capability Framework).
Technical capacity – A secure, interoperable and adaptive technical infrastructure (e.g. computing capacity, data storage, connectivity) is a cornerstone for deploying and scaling AI solutions from local to cross-national levels. Robust infrastructure ensures that AI tools can process large, complex datasets in real time, integrate across diverse health information systems, and operate safely and reliably to support both primary and analytic use of health data.
Agreeing upon common objectives – Supporting the development of common guardrails, strategies, and collective activities ensures the streamlined development and implementation of AI in health. An emerging leading practice includes the development of a national strategy that addresses the unique aspects of AI development, deployment and use in healthcare. Such strategies have been developed by seven OECD countries, with several more under development.
Oversight, measurement and monitoring – Given the rapid advancements of AI technologies, it is necessary to understand the potential benefits and risks while taking action to optimise benefit while protecting from harms. Emerging leading practices include the development of indicators to assess clinical effectiveness and economic impact of AI at scale.
Public – It is critical to engage and educate the public to foster trust in AI in health and to empower their active participation. An emerging leading practice includes the establishment of public assemblies to integrate the public voice into work on AI in health (e.g. the French citizens’ assembly for digital health).
Healthcare providers – Many AI solutions in health are used by both front and back-end healthcare workers. An emerging leading practice includes mandating education on AI within the curriculum of health professionals (e.g. in Korea and others).
Industry – Actively engaging and collaborating with industry supports literacy, common understanding, and alignment in AI solutions. Emerging leading practices include developing collaborative and transparent processes for industry to engage with governments to test, validate, and support the integration of AI solutions (e.g. the United Kingdom NHS AI Lab).
Trustworthy use of AI – Trust underpins the use of AI in health. There is an imperative to ensure that humans and health promotion are at the core of any AI solution brought into the health ecosystem, while doing no harm. Emerging leading practices include the development and use of ethical impact assessments for AI solutions in health (e.g. New Zealand’s checklist embedding ethics in tool evaluation).
While many countries are making individual progress, there is a strategic opportunity for multilateral collaboration to reduce unnecessary barriers to scale. A shared recognition is emerging: coherent, cross-border compatible policies are essential to balance innovation with safety, and economic opportunity with building public trust.