Artificial intelligence (AI) is unlikely to replace doctors. However, its evolving capabilities are rapidly shaping a new future for health care while changing the patient-provider relationship. AI presents opportunities to enhance efficiency within health care systems by automating tasks and optimizing workflows. It can also use protein language models to develop new medications and enable early detection of health risks, leading to more effective interventions and better patient outcomes. AI is far from a panacea, however: its drawbacks include potential biases in algorithms that exacerbate health disparities, as well as data privacy and security risks associated with sensitive patient information. As AI becomes pervasive, governments around the globe need to weigh these and other emerging opportunities and challenges in order to devise effective policies that enhance health care without harming patients.

The divergent regulatory approaches to AI in the United States and the European Union carry significant implications for innovation, patient safety, and market competitiveness in health care. While the EU is prioritizing data privacy and ethical AI deployment through extensive legislation, U.S. regulations have been slower to adapt to the dynamic AI landscape, potentially affecting health care access and outcomes for patients. Responsibly designed regulatory frameworks can facilitate the relationship among AI companies, health care providers, and patients, ensuring that AI’s risks to patients are mitigated and the benefits of its applications are harnessed.

Divergent EU and U.S. regulations show there is no one-size-fits-all approach to AI

A nurse works in the ICU in Barcelona, Spain in 2020. Cesc Maymo/Getty Images

Under the EU’s 2016 General Data Protection Regulation (GDPR), AI systems in health care and the health data they process must comply with stringent data protection standards within the EU. GDPR stipulates several crucial provisions that affect the use of AI in health care, all of which may hinder the kind of broad data collection necessary to train AI models.

GDPR mandates that the processing of personal data, including health care data, must be lawful, fair, and transparent. It requires health care organizations and AI developers to provide clear information to individuals about how their data will be used in AI systems and obtain explicit consent for such processing. GDPR enforces purpose limitation such that health care data collected for specific purposes should not be repurposed for unrelated activities. AI applications in health care must adhere strictly to these limitations, ensuring that patient data is utilized only for the purposes for which consent was obtained or as permitted under GDPR.

Additionally, data minimization requirements may constrain how AI models can be trained. The regulations stipulate that only the minimum amount of personal data necessary for a specific purpose should be processed. This principle applies to AI systems in health care, necessitating the careful selection and handling of data to ensure minimal exposure of sensitive information to AI models without patients’ consent. The strict requirements on AI developers regarding data security and confidentiality mean they must implement appropriate measures to protect personal data from unauthorized access, disclosure, alteration, or destruction, while maintaining transparency with patients. Such measures include encryption, access controls, and regular security assessments to safeguard patient information used in AI systems in health care settings.

The 2024 EU AI Act (AI Act), which focuses on limiting AI’s harm to humans through a risk-based tier system, builds on GDPR requirements to foster trust and transparency among patients, providers, and AI companies, including around the development and functions of AI algorithms. As the AI Act passed only in early 2024, its implementation and impacts have yet to be fully seen and evaluated. However, proponents believe that it will ultimately create more patient-centric and outcome-oriented health care systems and technologies, thanks to the tiered risk system designed to prevent the development of AI that could harm human beings.

The U.S. approach to AI regulation has evolved quite differently from the EU’s. Historically, the relatively hands-off, market-focused approach of U.S. regulators spurred technological advancements across sectors, including health. With respect to AI, businesses are largely adopting self-regulatory measures and strategies. In 2023, for example, Microsoft, Google, and OpenAI each announced policies pledging responsible AI deployment in different areas, such as content related to elections. While there are health care benefits to the U.S.’s approach to date, without some standardized regulations, the private sector could gain unnecessary and unwarranted access to patient data, undermining privacy and creating security risks. A 2022 survey of 11,004 Americans found that 60 percent of respondents were uncomfortable with their health care provider relying on AI, suggesting that both the public and private sectors need to raise awareness of AI’s positive contributions to the health care sector.

U.S. regulation of AI in health care works within a patchwork of existing laws through the Food and Drug Administration (FDA), which focuses on AI-integrated medical devices, and the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which governs Americans’ health data privacy. In late 2023, the Biden-Harris administration partnered with 15 leading AI companies and 28 health care providers and payers, including Allina Health, CVS Health, and Houston Methodist, to develop AI models responsibly. The commitments are captured in the FAVES (Fair, Appropriate, Valid, Effective, and Safe) principles, intended as a risk-management framework that each signatory monitors to address harms that applications may cause to patients. FAVES includes a requirement to continue researching and developing responsible uses of AI to improve health care outcomes, reduce clinician burnout, and improve patients’ experiences.

Outside of congressional involvement, the Biden-Harris administration has committed to working with AI companies and health care providers to create AI best practices, and in 2023 issued the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order) as a framework for AI development and deployment in the U.S. The Executive Order promotes responsible AI innovation in health care, including a range of grants and awards through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity for companies committed to developing these technologies responsibly.

Overall, the U.S.’s approach differs significantly from its EU counterpart’s. Congress’ historical hesitance to expand the FDA’s authority and current partisan gridlock could prevent the U.S. from adopting a single policy strategy to regulate AI in health care. The Biden-Harris administration has relied on a piecemeal approach through executive orders and private-sector commitments, but this could leave AI companies, health care providers, and patients to navigate a complicated relationship without the stewardship and monitoring of an independent party. AI companies in the U.S. can develop their technologies with little oversight at the outset, leaving the government to slowly rein in the technologies after they have been developed, marketed, and deployed.

Conversely, in the EU, GDPR and the AI Act provide boundaries that AI companies must stay within to release their products in health care markets. This approach helps ensure that patient safety remains the focus of technological advancement. Yet challenges remain. GDPR’s severity-based financial penalty structure for privacy violations has proven difficult to enforce. Meanwhile, the AI Act’s implementation could lead to a promising variety of health care applications being deemed “high risk” and banned from the EU by the end of the year.

Responsible deployment of AI needs to be safe by design and patient-focused

A medical worker operates a scanner using a platform that identifies lung injuries through artificial intelligence in Sao Paulo, Brazil in 2020. Nelson Almeida/AFP via Getty Images

The responsible use of AI in health care presents untold opportunities. AI models can improve individual health outcomes, streamline diagnoses, and be used to address social determinants of health (SDOH). Responsible AI use in health care could leverage the technology’s capabilities to inform targeted health interventions for populations affected by adverse SDOH, and SDOH data can be incorporated into AI models to improve risk identification. In some cases, AI can also accurately predict disease risk, as with cardiovascular disease, which accounts for almost a third of global deaths and disproportionately affects individuals of lower socioeconomic status.

However, AI also presents significant challenges to health and health care. AI algorithms can absorb biases from the data they are trained on, perpetuating existing health care disparities and inequities if left unchecked. To meet the Centers for Disease Control and Prevention’s (CDC) broad definition of health equity, “the state in which everyone has a fair and just opportunity to attain their highest level of health,” AI guardrails need to capture more granular aspects of equity. For instance, if historical health care data exhibit disparities in treatment based on race or socioeconomic status, AI algorithms may inadvertently perpetuate these inequities by making decisions based on such biased data. Algorithmic biases can be significantly reduced by diversifying the datasets used to train AI models, ensuring that they more accurately represent the diverse populations they serve. For example, a model trained to diagnose cardiovascular disease only on data from male patients is more likely to misdiagnose cardiac issues in women. Likewise, a skin cancer model trained mostly on data from lighter-skinned individuals could make errors when diagnosing darker-skinned patients. AI’s imperfections lie within the data it is fed by the people who create and maintain it, so it is imperative that developers, regulators, and health care systems solve these challenges from a patient-focused perspective.
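To make this failure mode concrete, the toy sketch below (all numbers and group labels are invented for illustration, not drawn from any real clinical dataset) shows how a diagnostic cutoff calibrated only on one subgroup can systematically miss true cases in another subgroup whose condition presents differently in the data:

```python
# Hypothetical illustration, not a real diagnostic model: a decision
# threshold learned from one subgroup can fail on another.

def calibrate_threshold(markers, labels):
    """Pick a cutoff separating disease (1) from healthy (0) in the
    training data: the midpoint between the two group means."""
    sick = [m for m, y in zip(markers, labels) if y == 1]
    healthy = [m for m, y in zip(markers, labels) if y == 0]
    return (sum(sick) / len(sick) + sum(healthy) / len(healthy)) / 2

def predict(markers, threshold):
    return [1 if m >= threshold else 0 for m in markers]

# Synthetic training set drawn only from "group A" (e.g., male patients),
# where the disease raises a biomarker strongly.
train_markers = [1.0, 1.1, 0.9, 3.0, 3.2, 2.9]
train_labels  = [0,   0,   0,   1,   1,   1]
threshold = calibrate_threshold(train_markers, train_labels)

# "Group B" (e.g., female patients): the same disease produces a smaller
# marker elevation, so true cases fall below the learned cutoff.
group_b_markers = [1.0, 1.2, 1.7, 1.8]  # last two are true cases
group_b_labels  = [0,   0,   1,   1]
preds = predict(group_b_markers, threshold)

missed = sum(1 for p, y in zip(preds, group_b_labels) if y == 1 and p == 0)
print(f"threshold={threshold:.2f}, missed group-B cases: {missed} of 2")
```

The model is "accurate" on the population it was trained on, yet misses every group-B case; only retraining on data that represents both groups fixes the error.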

The digital divide also hinders patients’ ability to engage with AI-driven health care solutions. This divide, highlighted by the COVID-19 pandemic, separates those who can access and benefit from these technologies from those who cannot, deepening existing health inequalities. Ensuring patient access is therefore crucial, as disparities in internet access, digital literacy, and socioeconomic status may limit equitable access to AI-powered health care solutions. AI companies could help close the digital divide by investing in more equitable access, supported by government funds, and by expanding on the private-sector commitments secured by the Biden-Harris administration. Bridging the digital divide to expand access and addressing algorithmic biases to enable greater equity are important steps toward harnessing the potential of AI to improve health care outcomes while upholding ethical standards and safeguarding patient rights.

Finally, improved transparency and privacy are important steps toward increasing patient trust. In the EU, GDPR affords patients more control over their data. While HIPAA seeks to achieve a similar goal for Americans’ health care data, consumers must place their trust in private-sector health care providers, AI developers, and health insurance companies, all of which may have competing priorities and interests. Regulations and oversight could fail patients in the long term because of AI’s inherent and massive appetite for data. One alternative method of patient empowerment is to train AI with anonymized data. This approach is promising, but anonymization alone does not guarantee privacy. In a 2018 study, an algorithm analyzing National Health and Nutrition Examination Survey data on physical activity from 14,451 individuals successfully reidentified 85.6 percent of adults and 69.8 percent of children, despite the removal of protected health identifiers.
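Reidentification of this kind typically works as a linkage attack: “anonymized” records are matched to a named auxiliary dataset on shared quasi-identifiers. The sketch below (all names and numbers are invented for illustration) uses age plus a daily step-count pattern as the linking key, standing in for the physical-activity patterns exploited in studies like the one above:

```python
# Hypothetical linkage-attack sketch: records stripped of names can still
# be re-identified by matching quasi-identifiers against auxiliary data.

anonymized = [
    {"id": "rec1", "age": 34, "steps": (8200, 7900, 8400)},
    {"id": "rec2", "age": 51, "steps": (3100, 2800, 3300)},
    {"id": "rec3", "age": 34, "steps": (12100, 11800, 12500)},
]

# Auxiliary data an attacker might hold (e.g., from a fitness-app leak).
auxiliary = [
    {"name": "Alice", "age": 34, "steps": (8200, 7900, 8400)},
    {"name": "Bob",   "age": 51, "steps": (3100, 2800, 3300)},
]

def reidentify(anon_rows, aux_rows):
    """Link anonymized rows to named rows that share the same
    quasi-identifiers (here: exact age and step-count pattern)."""
    matches = {}
    for row in anon_rows:
        key = (row["age"], row["steps"])
        for aux in aux_rows:
            if (aux["age"], aux["steps"]) == key:
                matches[row["id"]] = aux["name"]
    return matches

print(reidentify(anonymized, auxiliary))
```

Removing names and medical record numbers does nothing to blunt this attack, which is why stronger techniques such as aggregation or adding statistical noise are often discussed alongside simple de-identification.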

Looking ahead

A robotic arm for brain surgery is seen at the 2019 World Robot Conference in Beijing in 2019. Wang Zhao/AFP via Getty Images

Ensuring patient safety and data privacy amid the proliferation of AI-driven health care technologies calls for robust governance frameworks and transparent practices. The application of AI holds the potential to boost efficiency and improve patient outcomes, but it also raises ethical concerns. Examining the regulatory differences between the U.S. and the EU underscores the need to harmonize efforts to establish responsible AI frameworks around the world, prioritizing patient safety and supporting equal access to health care advancements. While the U.S. and the EU are largely tackling AI-related challenges through different approaches, there are pathways for learning from each other’s experiences and fostering collaboration.

Regardless of approach, improving data quality to remove algorithmic biases, mitigating social determinants of health that act as barriers to care, and closing the digital divide are essential steps toward the responsible deployment of AI in health care. A patient-centric approach is key to the responsible and effective deployment of AI technologies across the health care sector and around the world.

By Muhamed Sulejmanagic (Graduate Research Assistant) and Isabel Schmidt (Senior Policy Analyst and Research Manager).