Following a round of intense negotiations this week, lawmakers in Brussels have now reached a “provisional agreement” on the European Union’s proposed Artificial Intelligence Act (AI Act). The EU’s AI Act is anticipated to be the world’s first comprehensive set of rules to govern AI and could serve as a benchmark for other regions looking to pass similar laws.
According to the press release, negotiators established obligations for “high-impact” general-purpose AI (GPAI) systems that meet certain benchmarks, including risk assessments, adversarial testing, and incident reporting. The agreement also mandates transparency for those systems, including the creation of technical documentation and “detailed summaries about the content used for training” — something companies like ChatGPT maker OpenAI have so far refused to provide.
Another element is that citizens will have the right to file complaints about AI systems and receive explanations of decisions made by “high-risk” systems that affect their rights.
The press release didn’t go into detail about how all that would work or what the benchmarks are, but it did outline a framework for fines if companies break the rules. The penalties vary based on the violation and the size of the company, ranging from 35 million euros or 7 percent of global annual turnover down to 7.5 million euros or 1.5 percent of global annual turnover.
A number of AI applications are banned outright, such as scraping facial images from CCTV footage, categorization based on “sensitive characteristics” like race, sexual orientation, religion, or political beliefs, emotion recognition at work or school, and the creation of “social scoring” systems. The final two banned applications are AI systems that “manipulate human behavior to circumvent their free will” or “exploit the vulnerabilities of people.” The rules also include a list of safeguards and exemptions for law enforcement use of biometric systems, whether in real time or to search for evidence in recordings.
It’s expected that a final deal will be reached before the end of the year. Even then, the law likely won’t come into force until 2025 at the earliest.
The first draft of the EU’s AI Act was unveiled in 2021, seeking to distinguish what actually counts as AI, and synchronize the rules for regulating AI technology across EU member states. That draft predated the introduction of fast-changing generative AI tools like ChatGPT and Stable Diffusion, however, prompting numerous revisions to the legislation.
Now that a provisional agreement has been reached, further negotiations will still be required to finalize the details before the AI Act comes into force, including votes by Parliament’s Internal Market and Civil Liberties committees.
Negotiations over rules regulating live biometrics monitoring (such as facial recognition) and “general-purpose” foundation AI models like OpenAI’s ChatGPT have been highly divisive. These were reportedly still being debated this week ahead of Friday’s announcement, causing the press conference announcing the agreement to be delayed.
EU lawmakers have pushed to completely ban the use of AI in biometric surveillance, but governments have sought exceptions for military, law enforcement, and national security. Late proposals from France, Germany, and Italy to allow makers of generative AI models to self-regulate are also believed to have contributed to the delays.