The European Union has published a new framework to regulate the use of artificial intelligence across the bloc’s 27 member states. The proposal, which will take years to become law and will be subject to many tweaks and amendments along the way, nevertheless constitutes the most ambitious set of AI regulations seen anywhere to date.
The regulations cover a wide range of applications, from software in self-driving cars to algorithms used to vet job candidates, and arrive at a time when countries around the world are struggling with the ethical ramifications of artificial intelligence. Similar to the EU’s data privacy law, GDPR, the regulation gives the bloc the ability to fine companies that infringe its rules up to 6 percent of their global revenues, though such punishments are extremely rare.
“It’s our first ever legal framework on artificial intelligence.”
“It is a landmark proposal of this Commission. It’s our first ever legal framework on artificial intelligence,” said European Commissioner Margrethe Vestager during a press conference. “Today we aim to make Europe world-class in the development of secure, trustworthy, and human-centered artificial intelligence. And, of course, the use of it.”
Civil rights groups are wary about some aspects of the proposal. One of the most important components is a ban on four specific AI use cases throughout the EU. These bans are intended to protect citizens from applications that infringe on their rights, but critics say some prohibitions are too vaguely worded to actually stop harm.
One such prohibition is a ban on the use of real-time “remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement,” which would include facial recognition. The regulation adds, though, that there are numerous exceptions to this prohibition, including letting police use such systems to find the “perpetrator or suspect” of any criminal act that carries a minimum three-year sentence. Using technology like facial recognition in these cases would need to be approved by judiciaries or similar bodies, but this would still give law enforcement an extremely broad scope to deploy AI surveillance as and when they want.
“There are just too many exceptions and they are quite arbitrary,” Alexandra Geese, a German Green MEP, told The Verge regarding the regulation. “I don’t see the point of saying ‘we don’t want biometric recognition’ and then granting continuous exceptions.”
Sarah Chander, a senior policy advisor for digital rights group EDRi, agreed and said the proposal only offered “a veneer of fundamental rights protection.” Chander told The Verge that banning mass biometric surveillance outright was “the only way to safeguard democracy.”
“There are just too many exceptions and they are quite arbitrary.”
An earlier leaked version of the proposal did feature a much stricter ban on AI for mass surveillance, but this was evidently watered down. As Geese told The Verge: “It looks like the Commission said, ‘Look we want to ban this thing,’ and then the safety authorities of the member states, the ministers of the interior, stepped in and said ‘No way.’”
The fact that the ban mentions only “real-time” biometric identification also constitutes a substantial loophole. It seems to suggest that using facial recognition software after any target images have been captured is allowed. That would clear the way for European police to use services like the controversial Clearview AI, which has seen rapid uptake in the US.
Other prohibited applications of AI include the use of the technology to create social credit scores, to cause physical or psychological harm to people, and to manipulate people’s behavior using subliminal cues. “We prohibit them altogether because we simply consider them to be unacceptable,” said Vestager of these use cases during the press conference.
All other AI applications are sorted into three groups of stratified risk. The bottom category includes common, low-risk applications like spam filters, for which regulation will likely not change at all. Above those are “limited-risk” use cases like chatbots used to buy tickets or find out information, which will require a little more oversight. And above those are high-risk uses of AI, which are “the main focus of the framework,” as Vestager said.
AI systems like chatbots would be required to identify themselves to users
The high-risk applications are those which affect material aspects of people’s lives, like algorithms used to assess someone’s credit score or whether they can get a loan, as well as AI tools that control critical machinery like autonomous vehicles and medical devices. Their deployment and development will be overseen by various regulatory mechanisms — in some cases, based on existing national regulators like those devoted to digital privacy.
“Those AI systems will be subject to a set of five strict obligations because they will potentially have a huge impact on our lives,” said Vestager at the conference. These requirements include the mandatory use of “high-quality” training data to avoid bias, the implementation of “human oversight” into each system, and the creation of detailed documentation that explains how the software works to both regulators and users.
Another notable aspect is compulsory transparency obligations for AI systems that interact with humans, like chatbots, and for AI-generated content, like deepfakes. “The aim is to make it crystal clear that as users we are interacting with a machine,” said Vestager.
In addition to these new regulations, all AI systems classified as high-risk will also have to be indexed in a new EU-wide database. Daniel Leufer, a Europe policy analyst at Access Now, told The Verge that this was an unexpected and welcome addition to the proposal.
“It’s a really good measure, because one of the issues we have is just not knowing when a system is in use,” said Leufer. “For example, with Clearview AI we rely on investigative journalism and leaks to find out if anyone’s using it.” A database would provide “basic transparency about what systems are in use... enabling us to do our job.”
Digital rights groups said there were additional problems beyond the loopholes involving biometric identification. A hoped-for ban on the use of AI to automatically identify gender and sexual orientation was absent, for example, while algorithms used for predictive policing, which are frequently found to be affected by racial bias, will not have strong oversight.
“These are supposedly high risk [AI applications], but in terms of enforcement are self-assessment,” said Chander. “This allows companies who profit from the development of these systems to decide whether or not they conform with the law. It’s astounding.”
Despite such objections, many will see the EU as following through on a promise to create a “third way” between the US and China on AI policy. As Vestager noted at the press conference, the bloc wishes to distinguish itself with fair and ethical applications of AI, and this proposal is still the biggest step toward regulation in line with those values.
“I’m a little bit suspicious,” Geese said, “but I’m very happy that the EU is trying to be a global standard setter.”