The EU's new landmark AI law officially began to apply on February 2, when its first provisions took effect, aiming to ensure the safe and ethical use of artificial intelligence. The European Artificial Intelligence Act (AI Act), the world's first comprehensive regulation on artificial intelligence, is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights. It was agreed in negotiations with member states in December 2023 and formally entered into force in August 2024.
Europe and France are striving to become artificial intelligence powerhouses. At the Paris AI Summit on February 11, French President Emmanuel Macron announced €109 billion of AI investments for France in the years to come, adding that a European AI strategy would be a unique opportunity for Europe to accelerate in artificial intelligence. Many observers compare Macron's announcement to the $500 billion "Stargate Project" unveiled by U.S. President Donald Trump in his first week in office, which assembles key U.S. tech giants and the U.S. government for AI research and commercialization.
On February 4, the European Commission published official guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act. The Act, which aims to promote innovation while ensuring high levels of health, safety, and fundamental rights protection, classifies AI systems into different risk categories, including prohibited, high-risk, and those subject to transparency obligations. The guidelines, which are not legally binding, specifically address practices such as harmful manipulation, social scoring, and real-time remote biometric identification, among others. The Commission has approved the draft guidelines but has not yet formally adopted them.
The EU's efforts to ensure the safe and ethical use of AI technologies have drawn a fiery reaction from Big Tech companies supported by, and supportive of, Trump, who regularly threatens retaliation in response to any EU penalties imposed on U.S. tech companies and other firms. Trump is already planning to levy fresh tariffs on the EU in the coming months, once U.S. government agencies deliver him a detailed report examining EU trade practices.
The AI regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment.
The AI Act introduces a forward-looking definition of AI, based on a product-safety and risk-based approach in the EU. It bans certain applications of AI that it deems to pose an "unacceptable risk" to citizens and establishes fines for violations of the rules. Penalties are calculated as a percentage of the company's global turnover from the previous year or a set amount, whichever is higher; SMEs and start-ups face proportionate fines.
According to the EU, the AI Act aims to ensure that AI systems are developed and used responsibly. The rules impose obligations on providers and deployers of AI technologies and regulate the authorization of artificial intelligence systems in the EU single market. The law addresses risks linked to AI, such as bias, discrimination and accountability gaps, promotes innovation and encourages the uptake of AI.
The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights. As the world’s first law regulating AI, the EU’s rules could set a global standard in AI regulation, just as the general data protection regulation (GDPR) has done for data privacy, promoting ethical, safe, and trustworthy artificial intelligence worldwide.
The AI Act addresses the risks associated with specific uses of AI, categorising them into four levels of risk and establishing different rules accordingly.
The AI Act’s four risk levels and their corresponding rules are:
Minimal or no risks
Most AI systems pose minimal or no risk. AI-powered games or spam filters can be used freely; they are not regulated or otherwise affected by the EU's AI Act.
Limited risks
AI systems that present only limited risks, such as chatbots or AI systems that generate content, are subject to transparency obligations, such as informing users that their content was generated by AI so that they can make informed decisions concerning further use.
High risks
High-risk AI systems, such as those used in disease diagnoses, autonomous driving and the biometric identification of individuals involved in criminal activities or investigations, must meet strict requirements and obligations to gain access to the EU market. These include rigorous testing, transparency and human supervision.
Unacceptable risks
AI systems that pose a threat to people’s safety, rights or livelihoods are banned from use in the EU. These include cognitive behavioural manipulation, predictive policing, emotion recognition in the workplace and educational institutions, and social scoring. The use of real-time remote biometric identification systems such as facial recognition by law enforcement authorities in public spaces is also prohibited, with some limited exceptions.
Support for innovation
The AI Act's objectives go beyond enhancing the effective enforcement of existing law on fundamental rights and safety: it also aims to promote investment and innovation in AI within the EU, and to facilitate the development of a single market for AI applications, the EU said.
Accordingly, the rules include further provisions to support AI innovation in the EU. This also goes hand in hand with other initiatives, including the EU’s coordinated plan on artificial intelligence which aims to accelerate investment in AI across Europe.

In October 2020, the European Council discussed the digital transition. In relation to AI, EU leaders invited the Commission to propose ways to increase European and national public and private investment in artificial intelligence research, innovation and deployment; to ensure better coordination and more networks and synergies between European research centres of excellence; and to provide a clear, objective definition of high-risk artificial intelligence systems.
In April 2021, the Commission released a proposal for a regulation aiming to harmonise rules on artificial intelligence (the AI Act) and a coordinated plan comprising a set of joint actions for the Commission and member states. This package of rules aims to improve trust in artificial intelligence and foster the development and uptake of AI technology.