EU AI Act Compliance Checklist 2026
By Daman David Pant
May 2026
11 min read
The EU AI Act is now in force. Whether you are a provider building AI systems or a deployer putting them into use, you have legal obligations that apply now or will apply within months. This checklist gives you a structured, actionable view of what you need to do, organised by role and risk tier.
How to use this checklist: First identify your role (provider, deployer, or both). Then identify the risk tier of each AI system you are responsible for. The obligations that apply to you depend on both.
Step 1: Identify Your Role
| Role | Definition |
| --- | --- |
| Provider | Develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark, including via APIs |
| Deployer | Uses an AI system under its own authority in a professional context |
| Importer | Places on the EU market an AI system from a provider established in a third country |
| Distributor | Makes an AI system available on the EU market without modifying it |
Note that one organisation can hold multiple roles simultaneously. A company that builds and deploys its own AI system is both a provider and a deployer.
Step 2: Classify Your AI Systems
Prohibited AI Practices
- Confirm none of your systems use subliminal manipulation techniques
- Confirm none exploit vulnerabilities of specific groups (age, disability)
- Confirm no real-time remote biometric identification in public spaces (except narrow law enforcement exceptions)
- Confirm no social scoring that leads to detrimental or unjustified treatment (the prohibition covers both public and private actors)
- Confirm no risk assessments predicting criminal offences based solely on profiling or personality traits
- Confirm no untargeted scraping of facial images for recognition databases
- Confirm no emotion recognition in workplace or educational settings (except for medical or safety reasons)
High-Risk System Identification
Check whether any of your systems fall into these categories:
- Biometric identification or categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training (access, evaluation, assessment)
- Employment, worker management, and access to self-employment
- Access to essential private and public services (credit scoring, benefits)
- Law enforcement, migration, asylum, or border control
- Administration of justice and democratic processes
- Safety components of products covered by EU product safety legislation
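Keeping the classification step auditable can be as simple as a small lookup table in code. The sketch below is purely illustrative: the area keys, the `RiskTier` labels, and the triage logic are my shorthand for the categories listed above, not terms defined by the Act, and a first-pass result like this still needs legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency-only tier; not triaged by this sketch
    MINIMAL = "minimal"

# Illustrative shorthand for the Annex III high-risk categories above.
HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "justice",
    "product_safety_component",
}

def classify(use_case_area: str, is_prohibited_practice: bool = False) -> RiskTier:
    """Rough first-pass triage of one AI system; a lawyer confirms the tier."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if use_case_area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("employment").value)  # high
```

Running the classifier over an inventory of systems gives you a starting register to attach the per-tier obligations below.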
Step 3: Provider Obligations for High-Risk Systems
Risk Management System
- Establish a documented risk management system for the AI system
- Identify and analyse known and reasonably foreseeable risks
- Estimate and evaluate risks that may emerge in normal use and reasonably foreseeable misuse
- Adopt risk mitigation measures and document them
- Review and update the risk management system post-deployment
Data Governance
- Apply data governance practices covering design choices, data collection, and preparation
- Ensure training, validation, and testing datasets meet quality criteria
- Document the origin, scope, and characteristics of datasets used
- Implement measures to detect and address biases in datasets
- Ensure datasets are relevant, representative, and free from errors where possible
Technical Documentation
- Prepare technical documentation before placing the system on the market (Article 11)
- Include general description of the AI system and its intended purpose
- Document system components, algorithms, and logic
- Include training methodologies and datasets used
- Document validation and testing procedures and results
- Keep documentation updated throughout the system lifecycle
Transparency and Instructions
- Provide instructions for use to deployers in a clear, understandable format
- Include information on the intended purpose, performance, and limitations
- Disclose the level of accuracy, robustness, and cybersecurity
- Describe human oversight measures required
- Provide information on foreseeable misuse and risks
Human Oversight
- Design the system to allow human oversight during operation
- Enable operators to understand and monitor system outputs
- Include ability to override, interrupt, or shut down the system
- Ensure outputs are interpretable by the natural persons responsible
Registration and Conformity
- Register the high-risk AI system in the EU database before placing on the market
- Complete the required conformity assessment procedure
- Draw up an EU declaration of conformity
- Affix the CE marking where applicable
- Appoint an authorised representative in the EU if established outside the EU
Step 4: Deployer Obligations for High-Risk Systems
Use in Accordance with Instructions
- Use the AI system only for its intended purpose as documented by the provider
- Follow the instructions for use provided by the provider
- Do not modify the system in ways that alter its risk profile without re-assessment
Human Oversight and Monitoring
- Assign human oversight to natural persons with competence, authority, and resources
- Monitor system operation for risks and unexpected outputs
- Inform the provider if you detect a serious incident or malfunctioning
- Suspend or recall the system if it presents unacceptable risk
Transparency to Affected Persons
- Inform natural persons when they are subject to a high-risk AI system decision
- Provide information on the logic involved and significance of the output where required
- Conduct a Fundamental Rights Impact Assessment (FRIA) where required: public bodies, private entities providing public services, and certain banking and insurance deployers
Logging and Record-Keeping (Provider and Deployer)
- Retain logs generated by the high-risk AI system for a minimum of 6 months
- Make logs available to competent authorities upon request
- Document post-market monitoring findings
Step 5: GPAI Model Obligations
If you provide a General Purpose AI (GPAI) model, additional obligations apply regardless of the downstream use case:
- Prepare and maintain technical documentation
- Make available information for downstream providers integrating your model
- Put in place a policy to comply with EU copyright law and publish a sufficiently detailed summary of the content used for training
- If the model has systemic risk (above 10^25 FLOPs training threshold): conduct adversarial testing, report serious incidents to the EU AI Office, and implement cybersecurity measures
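The 10^25 FLOP threshold can be compared against a rough estimate of training compute. A common rule of thumb for dense transformer models, which is not part of the Act, puts training compute at about 6 FLOPs per parameter per training token; the model size and token count below are hypothetical:

```python
def estimated_training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the Act

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                    # 6.30e+24
print(flops >= SYSTEMIC_RISK_THRESHOLD)  # False: below the threshold
```

A back-of-the-envelope check like this only flags whether the presumption of systemic risk is plausibly in play; the actual determination follows the Act's criteria and the AI Office's assessment.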
Key Deadlines
Feb 2025
Prohibited AI practices provisions and AI literacy obligations apply. Organisations must have eliminated any prohibited practices by this date.
Aug 2025
GPAI model obligations and governance provisions apply. Providers of GPAI models must comply with transparency and documentation requirements.
Aug 2026
Most remaining provisions apply, including the high-risk AI system obligations under Annex III. Providers and deployers of Annex III high-risk systems must be fully compliant.
Aug 2027
High-risk AI system obligations under Annex I apply (AI used as safety components of products covered by existing EU product safety legislation).