Your EU AI Act Summary: 8 things leaders need to know


The EU’s AI Act officially came into force on 1 August 2024, making it the world’s first comprehensive regulation of AI technology.

To help you understand what this means for your business, we’ve combed through the documentation to bring you a high-level summary of what the regulations contain and what business leaders need to know. 

1. The AI Act classifies AI according to its risk levels  

The new regulations break down AI applications by the level of risk they pose to public safety, fundamental rights, or the environment. This risk is divided into four categories:  

  • Unacceptable risk. This category includes AI systems used for manipulative, exploitative, or social control practices that seriously threaten citizens' rights. Examples include social scoring systems and AI systems that generate facial recognition databases through untargeted scraping of facial images. Under the AI Act, all systems deemed to pose an unacceptable risk are strictly prohibited.
  • High risk. These systems face the most rigorous regulations because they pose significant potential harm to our fundamental rights. Any AI system that profiles individuals is considered high-risk – for instance, systems that use AI to evaluate creditworthiness or to allocate work tasks based on personal traits.
  • Limited risk. The risks of these systems stem from a lack of transparency about AI usage, and they are unlikely to cause significant harm, so the rules are less stringent. Chatbots are one example; the regulations stipulate that users must be informed they are interacting with AI.
  • Minimal risk. These AI systems, such as AI-enabled spam filters, do not fall within the previous classifications. They are free to use and are not regulated under the Act.
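
If you want to track where each of your AI systems sits under the Act, these four tiers are easy to model in internal compliance tooling. Below is a minimal, illustrative sketch in Python – the tier names come from the Act, but the `RiskTier` enum and the one-line summaries are our own simplification, not a legal test:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # most rigorous obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # unregulated under the Act

# Illustrative one-line summary of how each tier is treated.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "Prohibited - may not be placed on the EU market.",
    RiskTier.HIGH: "Permitted, subject to the Act's strictest obligations.",
    RiskTier.LIMITED: "Permitted, with transparency duties (e.g. disclosing AI use).",
    RiskTier.MINIMAL: "Permitted - no obligations under the Act.",
}

def treatment_for(tier: RiskTier) -> str:
    """Return the high-level regulatory treatment for a given tier."""
    return TREATMENT[tier]

print(treatment_for(RiskTier.LIMITED))
# Permitted, with transparency duties (e.g. disclosing AI use).
```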

2. Changes to this classification are expected with the rise of generative AI  

Although the AI Act is already in force, big changes could soon be underway thanks to the explosion of generative AI (GenAI) in the years since the legislation was first drafted.  

In 2021, when much of the work on the regulations took place, most AI applications available within the EU single market fell under the “minimal risk” category and were therefore expected to go unregulated.  

By contrast, in 2024, GenAI is the most widely used type of AI among organisations, and it has many potentially dangerous uses. This will not only make updates to the legislation necessary but could also expand the scope of the regulation itself.

3. There are special rules for General Purpose AI (GPAI) providers 

The EU AI Act has specific rules for “general purpose” AI (or GPAI): any AI model or system with enough generality to perform a wide range of tasks for which it was not specifically designed. OpenAI’s ChatGPT and Google’s Gemini are two examples.

Under the new legislation, all GPAI model providers must: 

  • Provide technical documentation, including the training and testing process for the model 

  • Give instructions for its use and how to integrate it into AI systems 

  • Comply with the Copyright Directive 

  • Publish a summary of the content used for training  

Providers of free and open-license GPAI models need only comply with the last two points – unless their models present a systemic risk, in which case they must also:

  • Conduct model evaluations  

  • Carry out adversarial testing to assess and mitigate systemic risks 

  • Track, document, and report serious incidents and corrective measures to the AI Office and any relevant national authorities 

  • Put cybersecurity protections in place 

4. Like GDPR, the AI Act will apply even if your business is based outside the EU 

There are many similarities between the AI Act and the EU’s General Data Protection Regulation (GDPR), which was adopted in 2016 and has applied since 2018. Both regulations were world firsts in protecting citizens from the potential ramifications of technological development.

There are also important regulatory similarities. Just as GDPR applies to any business that processes the personal data of people in the EU, regardless of where that business is established, the same is true of the AI Act.  

Don’t risk non-compliance just because you’re based outside the EU; the penalties could be huge.

5. Penalties for non-compliance range up to €35 million 

For companies that fall foul of the new regulations, fines can be severe. Any company that breaks the rules around prohibited AI could face fines of up to €35 million, or up to 7% of its total worldwide annual turnover for the preceding financial year – whichever is higher. 

Non-compliance with other areas of the AI Act can incur fines of up to €15 million, or 3% of total worldwide annual turnover, whichever is higher.

There are also fines for supplying incorrect, incomplete, or misleading information to the authorities enforcing the regulations: up to €7.5 million, or 1% of total worldwide annual turnover, whichever is higher.
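
To make the “whichever is higher” rule concrete, here is a back-of-the-envelope sketch of the three penalty tiers above. The violation labels are our own hypothetical shorthand, and real fines are set by regulators up to these caps rather than computed by formula:

```python
# (fixed cap in EUR, share of total worldwide annual turnover)
FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def maximum_fine(violation: str, turnover_eur: float) -> float:
    """Upper bound of a fine: the fixed cap or the turnover share,
    whichever is higher."""
    cap, share = FINE_TIERS[violation]
    return max(cap, share * turnover_eur)

# A company with a €1bn annual turnover breaching the prohibited-AI rules:
print(f"€{maximum_fine('prohibited_ai', 1_000_000_000):,.0f}")  # €70,000,000
```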

6. Developers of AI systems face many obligations – but they’re not the only ones

In the terminology of the AI Act, AI “providers” are natural or legal persons, public authorities, agencies, or other bodies that develop an AI system or a general-purpose AI model. “Deployers” are the natural or legal persons that deploy the AI system professionally – for instance, a business using it as part of its service.

Many of the AI Act’s provisions apply to providers placing high-risk systems on the market, but deployers have obligations too: as an organisation using such a system, you’re still responsible for proper human oversight, carrying out due diligence, and keeping your clients informed.

7. Deadlines for compliance begin in 2025… 

So, how long do you have before your AI activities must be compliant with EU law? Here’s a brief timeline: 

  • Prohibited systems must be withdrawn by 2 February 2025

  • Rules for GPAI start to apply as of August 2025

  • Rules for high-risk AI systems under Annex III (including biometrics, safety components for digital infrastructure, and recruitment) will start to apply in August 2026

  • Rules for high-risk AI systems under Annex I and large-scale IT systems under Annex X will start to apply in August 2027

8. … But the AI Act offers a grace period for achieving compliance 

The above timeline shows when each set of AI Act rules starts to apply, depending on your AI system’s classification. However, for systems already on the market when their rules begin to apply, the law grants a period to achieve compliance. For example, providers of GPAI models placed on the market before 2 August 2025 must comply with the AI Act by 2 August 2027.

Similarly, say you’ve developed an AI model for recruitment screening – considered “high risk” under Annex III – and you launch it in June 2026. You would have until June 2028 to make it fully compliant, whereas if you launched it two months later, it would need to be compliant on launch day.
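
As a rough illustration of that worked example, here is how the date arithmetic might look in code – a simplified sketch of the example above only, since the Act’s actual transitional provisions are more nuanced:

```python
from datetime import date

# Rules for Annex III high-risk systems start to apply on this date.
ANNEX_III_APPLIES = date(2026, 8, 2)

def compliance_deadline(launch: date) -> date:
    """Launch before the rules apply: roughly two years from launch.
    Launch on or after that date: compliant from day one."""
    if launch < ANNEX_III_APPLIES:
        return launch.replace(year=launch.year + 2)
    return launch

print(compliance_deadline(date(2026, 6, 1)))   # 2028-06-01
print(compliance_deadline(date(2026, 8, 15)))  # 2026-08-15
```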

This could lead to a rush of half-baked AI products launching in the next few years. Don’t be one of them: build compliance into your AI applications from the very beginning. 

To find out how to build AI responsibly and with maximum business impact, read our report. 

 

Discover why AI is nothing without you

At Sopra Steria, we believe AI’s true potential is unlocked with human collaboration. By blending human creativity with advanced AI technology, we empower people to address society’s most pressing challenges – from combating disease to mitigating climate change – while helping our clients achieve their digital transformation goals.

We emphasise critical thinking and education to ensure AI upholds core human values like respect and fairness, minimising ethical risks. Together, we’ll create a future where AI inspires positive impact and enhances human brilliance. That’s why we believe that AI is nothing without you!
