EU AI Act Explained in Plain English

By Daman David Pant · May 2026 · 10 min read

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It applies to any organisation that develops, deploys, imports, or distributes AI systems that affect people in the European Union, regardless of where the organisation is based.

If you're preparing for the AIGP exam, the EU AI Act is one of the most heavily tested topics. Here's what you need to know, in plain language.

The Core Idea: Risk-Based Regulation

The EU AI Act doesn't ban most AI. Instead, it classifies AI systems by risk level and applies proportionate obligations: the higher the potential harm, the stricter the rules. There are four tiers.

Tier 1: Unacceptable Risk
Prohibited AI Practices

Banned outright. These systems pose an unacceptable threat to fundamental rights and human dignity.

Tier 2: High Risk
Regulated: Conformity Assessment Required

Permitted but subject to strict obligations before they can be deployed. Must be registered in an EU database.

Tier 3: Limited Risk
Transparency Obligations Only

Minimal requirements, mainly transparency (e.g. telling users they're interacting with an AI).

Tier 4: Minimal Risk
No Specific Obligations

Spam filters, AI in video games, recommendation engines. No regulation required under the Act.
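For readers who think in code, the four-tier logic can be sketched as a simple lookup. This is an illustrative Python sketch only, not a compliance tool: the real classification is defined by Articles 5 and 6 and Annex III of the Act, and the example use cases in the sets below are my own assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative keyword buckets; the Act defines these categories in
# Articles 5-6 and Annex III, not by keyword matching.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"credit scoring", "cv screening", "exam proctoring"}
LIMITED_RISK = {"chatbot", "deepfake generator"}

def classify(use_case: str) -> RiskTier:
    """Map a use case to its (approximate) EU AI Act risk tier."""
    uc = use_case.lower().strip()
    if uc in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if uc in HIGH_RISK:
        return RiskTier.HIGH
    if uc in LIMITED_RISK:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("Social scoring"))  # RiskTier.UNACCEPTABLE
print(classify("spam filter"))     # RiskTier.MINIMAL
```

Note the fall-through: anything not explicitly elevated lands in the minimal-risk tier, which mirrors how the Act itself works by default.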

Prohibited AI Practices (Tier 1)

These are banned from February 2025 onwards. They include:

- Social scoring by public or private actors
- AI using subliminal or manipulative techniques to materially distort behaviour
- AI exploiting vulnerabilities due to age, disability, or social or economic situation
- Untargeted scraping of facial images to build facial recognition databases
- Emotion recognition in workplaces and educational institutions (with narrow exceptions)
- Biometric categorisation to infer sensitive attributes such as race, political opinions, or sexual orientation
- Predictive policing based solely on profiling or personality traits
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions)

Exam tip: A common trap is classifying social scoring as "high-risk" rather than "prohibited." It is banned outright, not merely regulated with extra requirements.

High-Risk AI Systems (Tier 2)

High-risk AI systems can be deployed but require substantial compliance work. High-risk categories (Annex III) include:

- Biometric identification and categorisation
- Safety components of critical infrastructure (e.g. water, energy, transport)
- Education and vocational training (e.g. exam scoring, admissions decisions)
- Employment and worker management (e.g. CV screening, promotion decisions)
- Access to essential services (e.g. credit scoring, insurance pricing)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes

Obligations for High-Risk Systems

Requirement | What It Means in Practice
Risk management system | Ongoing identification and mitigation of risks throughout the lifecycle
Data governance | Training data must be relevant, representative, and free from bias
Technical documentation | Full documentation before deployment, kept up to date
Transparency & instructions | Users must be given clear information about the system's purpose and limitations
Human oversight | Humans must be able to intervene and override AI decisions
Accuracy, robustness & cybersecurity | Systems must meet performance standards throughout their lifecycle
Conformity assessment | Third-party or self-assessment to verify compliance before market placement
Registration | Must be registered in the EU AI database before deployment

GPAI Models: A Separate Category

General-purpose AI (GPAI) models, such as large language models, have their own set of rules under the Act. All GPAI model providers must:

- Maintain technical documentation for the model
- Provide information and documentation to downstream providers building on the model
- Put in place a policy to comply with EU copyright law
- Publish a sufficiently detailed summary of the content used for training

GPAI models deemed to present systemic risk (trained with over 10^25 FLOPs) face additional requirements including adversarial testing and serious incident reporting.
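As a rough back-of-the-envelope check, the 10^25 FLOPs threshold can be compared against the common "6 × parameters × tokens" estimate of transformer training compute. Both the heuristic and the example model sizes below are illustrative assumptions, not part of the Act.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # compute level at which systemic risk is presumed

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb estimate for dense transformer training compute
    # (~6 FLOPs per parameter per training token); a heuristic only.
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS

# Hypothetical 70B-parameter model on 15T tokens: ~6.3e24 FLOPs, below threshold
print(presumed_systemic_risk(70e9, 15e12))   # False
# Hypothetical 400B-parameter model on 15T tokens: ~3.6e25 FLOPs, above threshold
print(presumed_systemic_risk(400e9, 15e12))  # True
```

The takeaway for exam purposes is simply that the threshold is a compute presumption, not a capability test, and models above it carry extra obligations.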

Who Does the EU AI Act Apply To?

The Act has broad extraterritorial reach. It applies to:

- Providers placing AI systems on the EU market or putting them into service in the EU, wherever they are established
- Deployers of AI systems located in the EU
- Providers and deployers located outside the EU where the output of the AI system is used in the EU
- Importers and distributors of AI systems in the EU

This means a US company with no EU office is still subject to the Act if the output of its AI system is used in the EU.

Timeline

Date | What Comes Into Force
August 2024 | Act enters into force
February 2025 | Prohibited practices provisions apply
August 2025 | GPAI model obligations apply
August 2026 | High-risk AI system obligations apply (most provisions)
August 2027 | Obligations for certain high-risk systems in Annex I apply

What This Means for the AIGP Exam

The EU AI Act is one of the most tested areas in the AIGP exam. Focus on:

- The four risk tiers and examples of systems in each
- Which practices are prohibited outright (a common trap area)
- The obligations that attach to high-risk systems
- The separate GPAI regime and the systemic-risk compute threshold
- The implementation timeline and its key dates
- The Act's extraterritorial scope

Test your EU AI Act knowledge

Practice with 200 scenario-based questions, including a full EU AI Act domain. Free, no payment needed.

Start Free Practice Quiz →