Practical checklist for EU AI Act compliance. Classify your AI system, identify your obligations, and track implementation.
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI regulation. It applies to any company that develops, deploys, imports, or distributes AI systems used in the EU, regardless of where the company is based.
Most resources explain what the AI Act says. This checklist tells you what to do about it.
- Classify your AI system using the Risk Classification flowchart
- Check your role (provider, deployer, importer, distributor)
- Go through the checklist for your risk level
- Track implementation using the checkboxes
The AI Act defines 4 risk levels. Your obligations depend on your level.
```
                 Is your AI system...
                          |
           ┌──────────────┼──────────────┐
           |              |              |
      Subliminal        Social       Real-time
     manipulation?     scoring?     biometric ID
           |              |         in public?
           |              |              |
           v              v              v
  ┌──────────────────────────────────────┐
  |           UNACCEPTABLE RISK          |
  |    Banned (Art. 5) - Do not deploy   |
  └──────────────────────────────────────┘

  If none of the above, check Annex III...

  ┌──────────────────────────────────────┐
  |         HIGH RISK (Annex III)        |
  |  Biometrics, critical infrastructure,|
  |  education, employment, essential    |
  |  services, law enforcement,          |
  |  migration, justice, democratic      |
  |  processes                           |
  └──────────────────────────────────────┘

  If not in Annex III...

  ┌──────────────────────────────────────┐
  |             LIMITED RISK             |
  |  Chatbots, deepfakes, emotion        |
  |  recognition, biometric              |
  |  categorization                      |
  └──────────────────────────────────────┘

  If none of the above...

  ┌──────────────────────────────────────┐
  |             MINIMAL RISK             |
  |  Spam filters, AI in games,          |
  |  inventory management                |
  └──────────────────────────────────────┘
```
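The decision flow above can be sketched as a small classification helper. This is a minimal sketch; the field names (`banned_practice`, `annex_iii_use_case`, `transparency_scope`) are illustrative, not terms from the regulation, and a real classification requires legal analysis of the concrete use case:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Answers to the flowchart questions (field names are illustrative)."""
    banned_practice: bool      # Art. 5: subliminal manipulation, social scoring, etc.
    annex_iii_use_case: bool   # Annex III: employment, biometrics, education, etc.
    transparency_scope: bool   # Art. 50: chatbots, deepfakes, emotion recognition

def classify(profile: SystemProfile) -> str:
    """Map a system profile to one of the four AI Act risk levels,
    checking the categories in the same order as the flowchart."""
    if profile.banned_practice:
        return "unacceptable"
    if profile.annex_iii_use_case:
        return "high"
    if profile.transparency_scope:
        return "limited"
    return "minimal"
```

One system can sit in several categories at once (e.g. a high-risk system with a chat interface also carries Art. 50 duties), so treat the first match as the *minimum* compliance bar, not the only one.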
| Role | Definition | Key Obligation |
|---|---|---|
| Provider | Develops or puts AI system on market | Full compliance responsibility |
| Deployer | Uses AI system professionally | Human oversight, monitoring, fundamental rights impact assessment (Art. 27) |
| Importer | Brings non-EU AI system to EU market | Verify provider compliance |
| Distributor | Makes AI system available (not provider/importer) | Verify CE marking |
**Minimal Risk** - Examples: spam filters, recommendation engines, AI-powered search, inventory optimization
- No mandatory obligations (voluntary codes of conduct encouraged)
- Consider applying transparency best practices anyway
**Limited Risk** - Examples: chatbots, AI assistants, text/image generators, emotion recognition
Transparency Obligations (Art. 50):
- Disclosure: Users must be informed they are interacting with an AI system
- Chatbot identification: Clearly state "You are talking to an AI" at the start of interaction
- AI-generated content: Label content generated or manipulated by AI
- Deepfake labeling: Mark synthetic audio, video, and images as artificially generated or manipulated; AI-generated text published to inform the public on matters of public interest must also be disclosed
- Emotion recognition: Inform users when emotion recognition or biometric categorization is used
Implementation Checklist:
- Add AI disclosure to chatbot greetings / first message
- Implement "AI-generated" labels on all synthetic content
- Create user-facing AI usage policy
- Document how disclosures are presented to users
- Test that disclosures are visible and understandable
- Review disclosures quarterly for adequacy
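The first two checklist items can be sketched in code. This is a minimal illustration, not a compliance-certified implementation; the disclosure wording and label fields are assumptions you should adapt to your product and legal review:

```python
AI_DISCLOSURE = "You are talking to an AI assistant."  # wording is illustrative

def first_message(greeting: str, already_disclosed: bool = False) -> str:
    """Prepend the Art. 50 disclosure to the chatbot's opening message,
    unless the disclosure has already been shown in this session."""
    if already_disclosed:
        return greeting
    return f"{AI_DISCLOSURE} {greeting}"

def label_generated(text: str) -> dict:
    """Attach a machine-readable 'AI-generated' label to synthetic content
    so downstream UIs can render a visible badge."""
    return {"content": text, "ai_generated": True, "label": "AI-generated"}
```

Keeping the disclosure in one constant also makes the quarterly adequacy review (last checklist item) a one-line change.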
**High Risk** - Examples: HR/recruitment AI, credit scoring, medical AI, critical infrastructure, education assessment
Mandatory Requirements:
Risk Management (Art. 9):
- Establish a risk management system (continuous, iterative)
- Identify and analyze known and foreseeable risks
- Estimate and evaluate risks from intended use and misuse
- Adopt risk mitigation measures
- Test risk management measures for effectiveness
- Document residual risks and communicate to deployers
Data Governance (Art. 10):
- Training data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete (Art. 10(3))
- Examine data for biases (especially for special categories)
- Implement data governance procedures (purpose, collection, preparation)
- Document data provenance, characteristics, and limitations
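One concrete starting point for the bias examination is checking how groups are represented in the training data. This is a minimal sketch; the 10% default floor is an arbitrary illustrative threshold, and real bias analysis goes well beyond representation counts:

```python
from collections import Counter

def representation_rates(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of training records per value of a (possibly special-category) attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def flag_underrepresented(rates: dict[str, float], floor: float = 0.1) -> list[str]:
    """Groups whose share falls below an illustrative threshold,
    to be reviewed and documented as part of data governance."""
    return [group for group, rate in rates.items() if rate < floor]
```

The output of checks like this belongs in the data documentation required by the next item: provenance, characteristics, and known limitations.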
Technical Documentation (Art. 11):
- Prepare technical documentation BEFORE placing on market
- Include: system description, design, development, validation
- Include: risk management results
- Include: changes throughout lifecycle
- Keep documentation updated
Record-Keeping (Art. 12):
- Enable automatic logging of events (traceability)
- Logs must enable monitoring of operation
- Retain logs for appropriate period
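The Art. 12 logging items can be sketched as structured JSON-lines records. This is one possible shape, not a prescribed format; hashing inputs instead of storing them verbatim is a design assumption to limit personal data in logs:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_event(model_version: str, input_text: str, output_text: str) -> str:
    """Build one JSON-lines record for traceability: UTC timestamp,
    model version, and hashes of input/output rather than raw content."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
    }
    return json.dumps(record)
```

One record per inference, appended to an immutable store, gives you the traceability and monitoring the article asks for; the retention period should be set with legal counsel.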
Transparency (Art. 13):
- Provide clear instructions for use to deployers
- Include: capabilities, limitations, known risks
- Include: intended purpose and foreseeable misuse scenarios
- Include: human oversight measures
- Include: expected lifetime and maintenance needs
Human Oversight (Art. 14):
- Design for effective human oversight
- Enable human to understand AI system capabilities and limitations
- Enable human to correctly interpret AI output
- Enable human to decide not to use or override AI output
- Enable human to intervene or stop the AI system
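A common pattern for the oversight items above is confidence-gated routing: the system only acts autonomously above a threshold, and a human can always override. This is a sketch under assumptions (the 0.8 threshold and status strings are illustrative):

```python
def route(prediction: str, confidence: float, threshold: float = 0.8) -> dict:
    """Send low-confidence outputs to a human reviewer instead of auto-acting."""
    if confidence < threshold:
        return {"decision": None, "status": "needs_human_review",
                "suggestion": prediction}
    return {"decision": prediction, "status": "auto", "suggestion": prediction}

def human_override(routed: dict, reviewer_decision: str) -> dict:
    """A human can replace or reject the AI suggestion at any point."""
    return {**routed, "decision": reviewer_decision, "status": "human_decided"}
```

Note that `human_override` works on *any* routed item, not only low-confidence ones; the ability to disregard or stop the system must not itself be gated by the system.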
Accuracy, Robustness, Cybersecurity (Art. 15):
- Achieve appropriate levels of accuracy for intended purpose
- Declare accuracy metrics in instructions for use
- Implement resilience against errors and inconsistencies
- Implement cybersecurity measures against manipulation
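Declaring accuracy metrics only helps if you keep measuring against them. A minimal sketch of that check, assuming plain classification accuracy is the declared metric (your system may need precision/recall, calibration, or domain-specific measures instead):

```python
def accuracy(predictions: list, labels: list) -> float:
    """Fraction of predictions matching ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def meets_declared_accuracy(predictions: list, labels: list, declared: float) -> bool:
    """Compare measured accuracy against the figure declared
    in the instructions for use; a failure should trigger review."""
    return accuracy(predictions, labels) >= declared
```

Running this on a held-out evaluation set per release turns the Art. 15 declaration into a regression test rather than a one-off claim.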
Conformity Assessment:
- Conduct conformity assessment (self-assessment or third-party)
- Prepare EU Declaration of Conformity
- Affix CE marking
- Register in the EU database (Art. 49; database established under Art. 71)
If you provide a general-purpose AI model (like an LLM):
All GPAI providers (Art. 53):
- Prepare technical documentation
- Provide information to downstream providers
- Implement copyright compliance policy
- Publish summary of training data
Systemic risk GPAI (Art. 55) — additional:
- Conduct model evaluations
- Assess and mitigate systemic risks
- Track and report serious incidents
- Ensure adequate cybersecurity
| Date | What Happens |
|---|---|
| Aug 1, 2024 | AI Act enters into force |
| Feb 2, 2025 | Banned AI practices (Art. 5) apply |
| Aug 2, 2025 | GPAI obligations apply |
| Aug 2, 2026 | Most obligations apply (including high-risk) |
| Aug 2, 2027 | High-risk AI in Annex I (regulated products) |
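The timeline above can be encoded as a small lookup, useful for compliance dashboards. The milestone labels are shortened paraphrases of the table rows:

```python
from datetime import date

# Key application dates from the timeline above.
MILESTONES = [
    (date(2025, 2, 2), "banned practices (Art. 5)"),
    (date(2025, 8, 2), "GPAI obligations"),
    (date(2026, 8, 2), "most obligations, including Annex III high-risk"),
    (date(2027, 8, 2), "high-risk AI in Annex I regulated products"),
]

def obligations_in_force(today: date) -> list[str]:
    """Which AI Act milestones already apply on a given date."""
    return [label for milestone, label in MILESTONES if today >= milestone]
```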
| Risk Level | Key Actions | Articles |
|---|---|---|
| Limited Risk | Add AI disclosure, label AI-generated responses | Art. 50 |
| High Risk (Annex III, point 4) | Full compliance (risk management, documentation, human oversight, conformity assessment) | Art. 6, 9-15 |
| Limited Risk | Label AI-generated content | Art. 50 |
| High Risk (Annex III, point 1) | Full compliance + medical device regulation | Art. 6, 9-15 + MDR |
- EU AI Act full text - Official regulation
- AI Act compliance guide for Italian companies - Practical guide (Italian)
- What is Human-in-the-Loop? - Implementation guide
- SideMindBot - AI Act compliant chatbot platform with native Art. 50 disclosure
- Best AI chatbots for Italian companies 2026 - Comparison with compliance focus
Found an error or want to add a checklist item? Open an issue or PR. This is a living document that gets updated as AI Act implementing acts and guidelines are published.
MIT - See LICENSE
Made by Synaptica Solution - AI Governance & Process Automation
Disclaimer: This checklist is for informational purposes. It is not legal advice. Consult a qualified legal professional for compliance decisions.