
AI systems should be accountable, explainable, and unbiased, says EU


The European Union has published new guidelines on developing ethical AI


The European Union today published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence.

These rules aren’t like Isaac Asimov’s “Three Laws of Robotics.” They don’t offer a snappy moral framework that will help us control murderous robots. Instead, they address the murky and diffuse problems that will affect society as we integrate AI into sectors like health care, education, and consumer technology.

So, for example, if an AI system diagnoses you with cancer sometime in the future, the EU’s guidelines would want several safeguards in place: that the software wasn’t biased by your race or gender, that it didn’t override the objections of a human doctor, and that you were given the option to have the diagnosis explained to you.

So, yes, these guidelines are about stopping AI from running amok, but at the level of admin and bureaucracy, not Asimov-style murder mysteries.

To help with this goal, the EU convened a group of 52 experts who came up with seven requirements they think future AI systems should meet. They are as follows:

  • Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
  • Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples; a brief sketch of one appears after this list), and it should be reasonably reliable.
  • Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
  • Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
  • Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
  • Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and should “enhance positive social change.”
  • Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
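
The “adversarial examples” mentioned under technical robustness are inputs nudged just enough to flip a model’s output. As a rough, hedged illustration of that failure mode (the model, inputs, and epsilon value below are hypothetical stand-ins; no such code appears in the EU report), here is a minimal sketch of the classic fast gradient sign method in Python with PyTorch:

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, label, epsilon=0.05):
        """Return a copy of input x perturbed to raise the model's loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), label)
        loss.backward()
        # Step each input value by +/- epsilon along the gradient's sign.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

A system that meets the robustness requirement should, roughly speaking, give the same answer for x and fgsm_attack(model, x, label), or at least detect and flag the discrepancy.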

You’ll notice that some of these requirements are pretty abstract and would be hard to assess in an objective sense. (Definitions of “positive social change,” for example, vary hugely from person to person and country to country.) But others are more straightforward and could be tested via government oversight. Sharing the data used to train government AI systems, for example, could be a good way to fight against biased algorithms.
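
To make “testing for bias” concrete: one simple audit an overseer could run is comparing outcome rates across demographic groups. This is only an illustrative sketch (the data, group labels, and four-fifths threshold are invented here; the EU guidelines don’t prescribe any particular metric):

    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    # Hypothetical audit: flag if any group's approval rate falls below
    # 80 percent of the best-treated group's rate (the "four-fifths" rule
    # from US employment law, borrowed here purely as an example).
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = approval_rates(decisions)
    if min(rates.values()) < 0.8 * max(rates.values()):
        print("possible disparate impact:", rates)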

These guidelines aren’t legally binding, but they could shape any future legislation drafted by the European Union. The EU has repeatedly said it wants to be a leader in ethical AI, and it has shown with GDPR that it’s willing to create far-reaching laws that protect digital rights.

But this role has been partly forced on the EU by circumstance. It can’t compete with America and China — the world’s leaders in AI — when it comes to investment and cutting-edge research, so it’s chosen ethics as its best bet to shape the technology’s future.

The EU wants to shape global AI development through ethics

As part of that effort, today’s report includes what’s being called a “Trustworthy AI assessment list” — a list of questions that can help experts figure out any potential weak spots or dangers in AI software. This list includes questions like “Did you verify how your system behaves in unexpected situations and environments?” and “Did you assess the type and scope of data in your data set?”

This assessment list is just preliminary, but the EU will be gathering feedback from companies in the coming years, with a final report on its utility due in 2020.

Fanny Hidvégi, a policy manager at digital rights group Access Now and an expert who helped write today’s guidelines, said the assessment list was the most important part of the report. “It provides a practical, forward-looking perspective” on how to mitigate potential harms of AI, Hidvégi told The Verge.

“In our view the EU has the potential and responsibility to be in the forefront of this work,” said Hidvégi. “But we do think that the European Union should not stop at ethics guidelines ... It can only come on top of legal compliance.”

Others doubt that the EU’s attempt to shape global AI development through ethics research will have much of an effect.

“We are skeptical of the approach being taken, the idea that by creating a golden standard for ethical AI it will confirm the EU’s place in global AI development,” Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, told The Verge. “To be a leader in ethical AI you first have to lead in AI itself.”