
WHO outlines principles for ethics in health AI


They include protecting autonomy and ensuring equity

Share this story

Photo by FABRICE COFFRINI/AFP via Getty Images

The World Health Organization released a guidance document outlining six key principles for the ethical use of artificial intelligence in health. Twenty experts spent two years developing the guidance, which marks the first consensus report on AI ethics in healthcare settings.

The report highlights the promise of health AI and its potential to help doctors treat patients, particularly in under-resourced areas. But it also stresses that technology is not a quick fix for health challenges, especially in low- and middle-income countries, and that governments and regulators should carefully scrutinize where and how AI is used in health.

The WHO said it hopes the six principles can be the foundation for how governments, developers, and regulators approach the technology. The six principles its experts came up with are: protecting autonomy; promoting human safety and well-being; ensuring transparency; fostering accountability; ensuring equity; and promoting tools that are responsive and sustainable.

There are dozens of potential ways AI could be used in healthcare: applications in development that use AI to screen medical images like mammograms, tools that scan patient health records to predict whether patients might get sicker, devices that help people monitor their own health, and systems that help track disease outbreaks. In areas where people don’t have access to specialist doctors, such tools could help evaluate symptoms. But when they’re not developed and implemented carefully, they can fail to live up to their promise at best, and cause harm at worst.

Some of the pitfalls were clear during the past year. In the scramble to fight the COVID-19 pandemic, healthcare institutions and governments turned to AI tools for solutions. Many of those tools, though, had some of the features the WHO report warns against. In Singapore, the government admitted that a contact tracing application collected data that could also be used in criminal investigations — an example of “function creep,” where health data was repurposed beyond the original goal. Most AI programs that aimed to detect COVID-19 based on chest scans were based on poor data and didn’t end up being useful. Hospitals in the United States used an algorithm designed to predict which COVID-19 patients might need intensive care before the program was tested.

“An emergency does not justify deployment of unproven technologies,” the report said.

The report also recognized that many AI tools are developed by large, private technology companies (like Google and Chinese company Tencent) or by partnerships between the public and private sectors. Those companies have the resources and data to build these tools, but may not have incentives to adopt the proposed ethical framework for their own products. Their focus may be on profit rather than the public good. “While these companies may offer innovative approaches, there is concern that they might eventually exercise too much power in relation to governments, providers and patients,” the report reads.

AI technology in healthcare is still new, and many governments, regulators, and health systems are still figuring out how to evaluate and manage these tools. Being thoughtful and measured in the approach will help avoid potential harm, the WHO report said. “The appeal of technological solutions and the promise of technology can lead to overestimation of the benefits and dismissal of the challenges and problems that new technologies such as AI may introduce.”


Here’s a breakdown of the six ethical principles in the WHO guidance and why they matter:

  • Protect autonomy: Humans should have oversight of and the final say on all health decisions — decisions shouldn’t be made entirely by machines, and doctors should be able to override them at any time. AI shouldn’t be used to guide someone’s medical care without their consent, and their data should be protected.
  • Promote human safety: Developers should continuously monitor any AI tools to make sure they’re working as they’re supposed to and not causing harm.
  • Ensure transparency: Developers should publish information about the design of AI tools. One regular criticism of the systems is that they’re “black boxes,” and it’s too hard for researchers and doctors to know how they make decisions. The WHO wants to see enough transparency that they can be fully audited and understood by users and regulators.
  • Foster accountability: When something goes wrong with an AI technology — like if a decision made by a tool leads to patient harm — there should be mechanisms determining who is responsible (like manufacturers and clinical users).
  • Ensure equity: That means making sure tools are available in multiple languages and that they’re trained on diverse sets of data. In the past few years, close scrutiny of common health algorithms has found that some have racial bias built in.
  • Promote tools that are responsive and sustainable: Developers should be able to regularly update their tools, and institutions should have ways to adjust if a tool seems ineffective. Institutions and companies should also only introduce tools that can be repaired, even in under-resourced health systems.