A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass “scoring of individuals,” a practice that potentially involves collecting varied data about citizens — everything from criminal records to their behavior on social media — and then using it to assess their moral or ethical integrity.
The recommendations are part of the EU’s ongoing efforts to establish itself as a leader in so-called “ethical AI.” Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and “human-centric” manner.
The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation.
Notably, the suggestions that the EU should ban AI-enabled mass scoring and limit mass surveillance are some of the report’s relatively few concrete recommendations. (Often, the report’s authors simply suggest that further investigation is needed in this or that area.)
Reports of China’s social credit system have worried experts in the West
The fear of AI-enabled mass scoring has developed largely from reports about China’s nascent social credit system. This program is often presented as a dystopian tool that will give the Chinese government huge control over citizens’ behavior, allowing it to dole out punishments (like banning someone from traveling on high-speed rail) in response to ideological infractions (like criticizing the Communist party on social media).
However, more recent, nuanced reporting suggests this system is less Orwellian than it seems. It’s split among dozens of pilot programs, with most focused on stamping out everyday corruption in Chinese society rather than punishing would-be thought crime.
Experts have also noted that similar systems of surveillance and punishment already exist in the West, but instead of being overseen by governments they’re run by private companies. With this additional context, it’s not clear how an EU-wide ban on “mass scoring” would be defined. Would it also cover the activities of insurance companies, creditors, or social media platforms, for example?
Elsewhere in today’s report, the EU’s experts suggest that citizens should not be “subject to unjustified personal, physical or mental tracking or identification” using AI. This might include using AI to identify emotions in someone’s voice or track their facial expressions, they suggest. But again, these are methods companies are already deploying, using them for tasks like tracking employee productivity. Should this activity be banned in the EU?
Uncertainty about the scope of the report’s recommendations is matched by criticism that such policy documents are, at this point, toothless.
Fanny Hidvegi, a member of the expert group that authored the report and a policy analyst at nonprofit Access Now, said the document was overly vague, lacking “clarity on safeguards, red lines, and enforcement mechanisms.” Others involved have criticized the EU’s process for being steered by corporate interests. Philosopher Thomas Metzinger, another member of the AI expert group, has pointed out how initial “red lines” on how AI shouldn’t be used have been watered down to mere “critical concerns.”
So while the EU may commission experts that tell it to ban AI mass surveillance and scoring, that doesn’t guarantee that legislation will be enacted to protect against these harms.