
Friend or Enemy? Pros and Cons of the New AI Act in Europe
Back in January 2025 I published this article about the new Artificial Intelligence Act of the European Union. At that time, the law had already entered into force in August 2024, but the first practical obligations had not yet come into effect. Today, in September 2025, we have already spent half a year living with the first measures, such as the ban on unacceptable-risk systems that started in February.
I am bringing back the original text because I believe it is still a useful starting point to understand the debate. With the experience we have gained in these months, it is now easier to see which aspects of the law have helped to build trust and adoption of AI, and which areas are proving more challenging in practice.
Original text (January 2025)
The European Union has taken a significant step by approving for the first time a law on Artificial Intelligence (AI), aimed at regulating its development and use. This legislation seeks to balance technological innovation with the protection of fundamental rights and the safety of citizens.
As usual, critics of the law have appeared.
And indeed, no law is perfect. This one is not either. Just as Artificial Intelligence and its uses will evolve, this law will also need to adapt over time.
In this article we review the most common arguments for and against this new regulation, whose first obligations begin to apply this FEBRUARY 2025:
PROS
- Protection of Fundamental Rights: The law ensures that AI systems respect fundamental rights, democracy, and the rule of law.
- Consumer Protection: Consumers and citizens are better protected against potential abuses and risks associated with AI.
- Transparency and Traceability: The law requires AI systems to be transparent and traceable, which increases trust in these technologies.
- Risk-Based Regulation: It classifies AI systems into different risk categories, allowing for more specific regulation adapted to the changing uses of technology.
- Building Trust: The law promotes trust in AI by ensuring that systems are safe and respect users’ rights.
- High Standards: The law sets high standards for the development and use of AI, which should improve the quality and reliability of AI systems. As the first comprehensive AI law at a global level, it will also serve as a reference for countries that legislate afterwards, just as the GDPR did (the Brussels Effect). South Korea has already followed suit with its “Basic Law on AI Development and Trust-Based Establishment,” which takes a very similar approach to the European law.
- Innovation and Competitiveness: The law is expected to boost innovation and position Europe as a leader in the field of AI.
- The idea is that by establishing a stable legal framework, trust in AI will increase, which in turn will drive adoption by users and expand the market. And a larger market, together with clear and stable rules, makes investments more attractive for companies.
CONS
- Possible Obstacle to Innovation: Some critics argue that the law could hinder the development and adoption of AI in Europe due to strict regulations.
- Counterargument: The purpose of the law is to protect citizens and uphold fundamental rights. At the same time, it aims to foster AI innovation in Europe by providing a clear and stable legal framework that ensures investment security. The law also includes the creation of regulatory sandboxes, which are controlled environments where companies can test AI products and services.
- Compliance Costs: Companies, especially small and medium-sized enterprises, could face high costs to meet the requirements of the law.
- Counterargument: Complying with any regulation requires extra effort; that is always the case. But if companies are aware of their responsibilities and obligations before starting AI development, incorporating the law’s requirements does not have to be especially costly. Knowledge of the law is key here.
- Implementation Challenges: Implementing the law may be complex and vary across Member States, potentially leading to inconsistencies.
- Counterargument: The law is the same across all of Europe. A two-level governance system will be established. National authorities will be responsible for supervising and enforcing rules related to AI systems, while the EU will regulate general-purpose AI models.
- Bureaucracy: The law may increase bureaucracy and administrative processes for companies.
- Counterargument: Some requirements and obligations must indeed be met. For example, high-risk systems must provide information and pass a conformity assessment, and transparency obligations apply to other AI systems.
- Unequal Competition: Companies outside the EU may not be subject to the same regulations, potentially creating a competitive disadvantage for European companies.
- Counterargument: In fact, it is quite the opposite. The law applies to any AI system used within European territory. This means that a foreign provider, say Meta, must comply with the European law in order to offer its AI in Europe.
- Technological Adaptation: Companies may face challenges in adapting existing technologies to the new legal requirements.
- Counterargument: The clock is already ticking. Starting FEBRUARY 2025, the first restrictions of the law (for unacceptable-risk AI systems) will begin to apply, and the rest of the rules will gradually follow.
Final reflection
Most AI systems currently used by companies are classified by the law as minimal risk and are not subject to any restriction or obligation.
BUT the first thing any company or organization will need to do is audit its current AI systems in order to classify them according to the law’s risk scale and identify those that will require some kind of restriction or obligation.
Author: Carles Gómara