The EU AI Act 2025: An Informative Guide

What Is The EU AI Act?

In a matter of months, the world's first full-scale legal framework on artificial intelligence will come into force. The EU AI Act is designed to address the risks and opportunities of AI, stemming from a number of concerns: human rights, safety, health, the environment, legality and democracy. Through these regulations, the EU also seeks to encourage innovation and growth in its competitive internal market for AI (Power, K, 2024).

The EU AI Act will apply to businesses that create or use AI systems, and to those who import, sell and/or distribute them. This covers those within the EU as well as developers, importers, deployers and distributors of AI systems outside the EU, provided the systems' output is used within the EU (Power, K, 2024).

The Act classifies AI systems according to risk. Exempt are AI systems used exclusively for military, defence or national-security purposes (Power, K, 2024).

When Will the EU AI Act be Implemented?

Member states will begin to feel the effects of the Act on August 2nd, 2025, the date by which they must designate national authorities to oversee the application of the rules and carry out market surveillance. To support this process, a panel of independent experts will offer enforcement advice to those affected. Although the EU AI Act commences in August 2025, the majority of its rules will apply from August 2nd, 2026, a full year later. It is important to note, however, that prohibitions on AI systems classified as presenting ‘unacceptable risk’, as outlined below, will be enforced from early 2026, six months after the Act’s official implementation (European Commission, 2024).

How Are Risks Classified?

The European Commission (2024) classifies the potential risks as follows:

Minimal Risk:  

Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act due to their minimal risk to citizens' rights and safety. Companies can voluntarily adopt additional codes of conduct.

Specific Transparency Risk:  

AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and is detectable as artificially generated or manipulated.

High Risk:  

AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include, for example, AI systems used for recruitment, to assess whether somebody is entitled to a loan, or to run autonomous robots.

Unacceptable Risk:  

AI systems considered a clear threat to people's fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example, emotion recognition systems used in the workplace and some systems for categorising people or real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

What Are the Repercussions of Non-Compliance?

Businesses that fail to comply with the above rules face fines of varying amounts, calculated with reference to relevant global annual turnover. ‘Fines could go up to 7% of the global annual turnover for violations of banned AI applications, up to 3% for violations of other obligations and up to 1.5% for supplying incorrect information.’ (European Commission, 2024).
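To make the scale of these penalties concrete, the percentage caps quoted above can be applied to a company's turnover. The following is an illustrative sketch only, not a legal calculator: the category names and the `max_fine` helper are hypothetical, and it covers only the turnover-based ceilings mentioned in the quote, not any other fine provisions of the Act.

```python
# Illustrative maximum-fine ceilings based on the percentage caps
# quoted from the European Commission (2024). Category names are
# hypothetical labels, not terms defined in the Act.
MAX_FINE_RATES = {
    "banned_ai_application": 0.07,    # up to 7% of global annual turnover
    "other_obligation": 0.03,         # up to 3%
    "incorrect_information": 0.015,   # up to 1.5%
}

def max_fine(global_annual_turnover: float, violation: str) -> float:
    """Return the maximum turnover-based fine for a violation category."""
    return global_annual_turnover * MAX_FINE_RATES[violation]

# Example: a company with a global annual turnover of EUR 500 million
print(max_fine(500_000_000, "banned_ai_application"))   # 35000000.0
print(max_fine(500_000_000, "incorrect_information"))   # 7500000.0
```

Even at the lowest tier, the ceiling for a mid-sized multinational runs into the millions, which is why compliance planning ahead of the 2025–2026 deadlines matters.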
