The European Union is considering banning the use of artificial intelligence for a number of purposes, including mass surveillance and social credit scores. This is according to a leaked proposal that is circulating online, first reported by Politico, ahead of an official announcement expected next week.
If the draft proposal is adopted, it would see the EU take a strong stance on certain applications of AI, setting it apart from the US and China. Some use cases would be policed in a manner similar to the EU’s regulation of digital privacy under GDPR legislation.
Member states, for example, would be required to set up assessment boards to test and validate high-risk AI systems. And companies that develop or sell prohibited AI software in the EU — including those based elsewhere in the world — could be fined up to 4 percent of their global revenue.
According to a copy seen by The Verge, the draft regulations include:
- A ban on AI for “indiscriminate surveillance,” including systems that directly track individuals in physical environments or aggregate data from other sources
- A ban on AI systems that create social credit scores, which means judging someone’s trustworthiness based on social behavior or predicted personality traits
- Special authorization for using “remote biometric identification systems” like facial recognition in public spaces
- Notifications required when people are interacting with an AI system, unless this is “obvious from the circumstances and the context of use”
- New oversight for “high-risk” AI systems, including those that pose a direct threat to safety, like self-driving cars, and those that have a high chance of affecting someone’s livelihood, like those used for job hiring, judiciary decisions, and credit scoring
- Assessment for high-risk systems before they’re put into service, including making sure these systems are explicable to human overseers and that they’re trained on “high quality” datasets tested for bias
- The creation of a “European Artificial Intelligence Board,” consisting of representatives from every nation-state, to help the commission decide which AI systems count as “high-risk” and to recommend changes to prohibitions
Perhaps the most important section of the document is Article 4, which prohibits certain uses of AI, including mass surveillance and social credit scores. Digital rights groups and policy experts who have reacted to the draft, though, say this section needs to be improved.
“The descriptions of AI systems to be prohibited are vague, and full of language that is unclear and would create serious room for loopholes,” Daniel Leufer, Europe policy analyst at Access Now, told The Verge. That section, he says, is “far from ideal.”
Leufer notes that a prohibition on systems that cause people to “behave, form an opinion or take a decision to their detriment” is unhelpfully vague. How exactly would national laws decide if a decision was to someone’s detriment or not? On the other hand, says Leufer, the prohibition against AI for mass surveillance is “far too lenient.” He adds that the prohibition on AI social credit systems based on “trustworthiness” is also defined too narrowly. Social credit systems don’t have to assess whether someone is trustworthy to decide things like their eligibility for welfare benefits.
On Twitter, Omer Tene, vice president of nonprofit IAPP (The International Association of Privacy Professionals), commented that the regulation “represents the typical Brussels approach to new tech and innovation. When in doubt, regulate.” If the proposals are passed, said Tene, it will create a “vast regulatory ecosystem,” which would draw in not only the creators of AI systems, but also importers, distributors, and users, and create a number of regulatory boards, both national and EU-wide.
This ecosystem, though, wouldn’t primarily be about restraining “big tech,” says Michael Veale, a lecturer in digital rights and regulations at University College London. “In its sights are primarily the lesser known vendors of business and decision tools, whose work often slips without scrutiny by either regulators or their own clients,” Veale tells The Verge. “Few tears will be lost over laws ensuring that the few AI companies that sell safety-critical systems or systems for hiring, firing, education and policing do so to high standards. Perhaps more interestingly, this regime would regulate buyers of these tools, for example to ensure there is sufficiently authoritative human oversight.”
It’s not known what changes might have been made to this draft proposal as EU policymakers prepare for the official announcement on April 21st. Once the regulation has been proposed, though, it will be subject to changes following feedback from MEPs and will have to be implemented separately in each nation-state.
Update April 14th, 11:03AM ET: Updated story with additional comment from Michael Veale.