The European Parliament took an important step in regulating technology on March 13, approving the EU’s proposed Artificial Intelligence Act, which aims to ensure safety and compliance with fundamental rights while boosting innovation.
The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. It strives to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field, and it establishes obligations for AI based on its potential risks and level of impact. The regulation is now expected to be rubber-stamped by the EU Council, becoming law soon after.
“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” the EU Parliament’s Internal Market Committee co-rapporteur Brando Benifei from Italy said during the plenary debate on March 12. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected,” he argued, noting that the AI Office will now be set up to help companies start complying with the rules before they enter into force. “We ensured that human beings and European values are at the very centre of AI’s development,” Benifei said.
The Parliament’s Civil Liberties Committee co-rapporteur Dragos Tudorache from Romania said the EU has linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. “However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice,” he said.
Banned applications
According to the European Parliament, the new rules ban certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing when it is based solely on profiling a person or assessing their characteristics, and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
The use of remote biometric identification systems (RBI) by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations, the EU Parliament said. “Real-time” RBI can only be deployed if strict safeguards are met, for example when its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorization. Such uses may include, for example, a targeted search for a missing person or preventing a terrorist attack. Using such systems after the fact (post-remote RBI) is considered a high-risk use case, requiring judicial authorization linked to a criminal offence, MEPs said.
Obligations for high-risk systems
Clear obligations are also foreseen for other high-risk AI systems, due to their significant potential for harm to health, safety, fundamental rights, the environment, democracy and the rule of law. Examples of high-risk AI uses include critical infrastructure; education and vocational training; employment; essential private and public services, such as healthcare and banking; certain systems in law enforcement; migration and border management; and justice and democratic processes, such as influencing elections. Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
Transparency requirements
General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents, MEPs said.
Additionally, deepfakes – images, audio or video content that has been artificially generated or manipulated – need to be clearly labelled as such.
Turning to measures to support SMEs, MEPs said regulatory sandboxes and real-world testing will have to be established at the national level and made accessible to SMEs and start-ups, so they can develop and train innovative AI before placing it on the market.