
Top 10 Actions for EU AI Act Compliance

The essential checklist for companies starting their AI Act compliance journey: actionable steps you can implement immediately to build regulatory confidence.

EU AI Act Compliance Checklist

Small and mid-sized businesses should proactively prepare for AI Act compliance and safe AI adoption. These ten actions represent the most critical steps you can take now.

1. Map Your AI Uses

Create a comprehensive inventory of all AI and automated tools used internally or in your products.

What to Document:

  • Tool name and vendor
  • Purpose and business function
  • Types of data processed
  • Who uses it and how
  • Integration points with other systems

This inventory becomes your "AI Use Case Register" – a living document that compliance teams, auditors, and leadership can reference. Even simple tools like spam filters or analytics should be listed, as they set a baseline for what's minimal risk versus what might need closer attention.
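A register like this is often easier to keep current when it has a fixed schema. The sketch below shows one possible shape in Python; the field names simply mirror the "What to Document" bullets above and are illustrative, not prescribed by the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One row in an AI Use Case Register (field names are illustrative)."""
    tool_name: str                # tool name, e.g. the vendor's product name
    vendor: str
    purpose: str                  # business function the tool serves
    data_types: list              # categories of data processed
    users: str                    # who uses it and how
    integrations: list = field(default_factory=list)  # connected systems

# Even simple tools such as spam filters belong in the register.
register = [
    AIUseCase(
        tool_name="SpamShield",   # hypothetical example entry
        vendor="ExampleCorp",
        purpose="Filter inbound email",
        data_types=["email metadata", "message content"],
        users="All staff, automatically",
    ),
]
```

Whether you keep this in code, a spreadsheet, or a GRC tool matters less than keeping one authoritative copy that is actually maintained.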

2. Classify Risk Levels

For each use case in your inventory, determine its risk category under the EU AI Act framework.

Risk Categories:

  • Minimal Risk: Most everyday AI tools (spam filters, recommendation engines, basic analytics)
  • Transparency Required: Chatbots, AI-generated content, emotion recognition systems
  • High-Risk: AI used in hiring, creditworthiness assessment, law enforcement, medical devices, critical infrastructure
  • Unacceptable/Prohibited: Social scoring, manipulative AI, real-time biometric surveillance in public spaces

Understanding where each system falls helps you prioritize compliance efforts and allocate resources appropriately.
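A first-pass triage of the register can be automated with a simple keyword mapping that mirrors the four tiers above. This is only a rough sorting aid under assumed keywords; real classification requires legal review of each system against the Act's annexes.

```python
# Hypothetical keyword map based on the example systems listed above.
RISK_TIERS = {
    "minimal": {"spam filter", "recommendation engine", "basic analytics"},
    "transparency": {"chatbot", "ai-generated content", "emotion recognition"},
    "high": {"hiring", "creditworthiness", "law enforcement",
             "medical device", "critical infrastructure"},
    "prohibited": {"social scoring", "manipulative ai",
                   "real-time biometric surveillance"},
}

def classify(use_case: str) -> str:
    """Return the first tier whose keyword appears in the use-case label."""
    label = use_case.lower()
    # Check the most severe tiers first so that, e.g., "social scoring"
    # never falls through to a milder category.
    for tier in ("prohibited", "high", "transparency", "minimal"):
        if any(keyword in label for keyword in RISK_TIERS[tier]):
            return tier
    return "unclassified"
```

Anything that comes back "high" or "prohibited" should go straight to legal counsel rather than be treated as settled.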

3. Check for Prohibited Practices

Ensure none of your AI uses fall into banned categories. The AI Act prohibits:

  • Social scoring systems that evaluate individuals based on behavior or personal characteristics
  • AI that exploits vulnerabilities of specific groups (children, persons with disabilities)
  • Subliminal manipulation that causes harm
  • Real-time remote biometric identification in publicly accessible spaces (with limited law enforcement exceptions)

If any current or planned AI use approaches these areas, immediately plan to stop or fundamentally redesign that application.

4. Apply Transparency Requirements

Identify any AI that interacts with people or creates content, then implement clear disclosures.

Required Disclosures:

  • Chatbots: Users must be informed they're interacting with AI
  • Emotion Recognition: Clear notice required before deployment
  • Biometric Categorization: Transparency obligations apply
  • AI-Generated Content: Must be labeled as artificially generated or manipulated

These transparency obligations apply based on what the system does, not its risk tier – even an otherwise low-risk system must disclose when it interacts directly with users or generates content.
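In practice, the chatbot disclosure is often implemented as a one-time notice at the start of each session. A minimal sketch, with hypothetical function names and illustrative disclosure wording:

```python
# Illustrative wording; the Act requires that users be informed they are
# interacting with AI, not this exact phrasing.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def first_reply(answer: str, disclosed: bool) -> tuple[str, bool]:
    """Prepend the AI disclosure to the first message of a session.

    Returns the (possibly annotated) message and the updated disclosure flag.
    """
    if not disclosed:
        return f"{AI_DISCLOSURE}\n\n{answer}", True
    return answer, True
```

The same pattern extends to AI-generated content: attach the label at the point of generation, so no downstream channel can forget it.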

5. Prepare for High-Risk Obligations

If you plan to use or offer AI in sensitive areas, anticipate strict requirements:

High-Risk System Requirements:

  • Risk management systems throughout the AI lifecycle
  • Data governance and quality standards
  • Technical documentation and record-keeping
  • Transparency and user information requirements
  • Human oversight mechanisms
  • Cybersecurity and accuracy standards

For high-risk systems from vendors, verify they have CE marking and meet EU requirements before deployment.

6. Strengthen Data & Privacy Protections

Review how personal data is used in AI systems:

  • Ensure lawful GDPR basis for all data processing
  • Conduct Data Protection Impact Assessments (DPIAs) for high-risk processing
  • Minimize data fed to AI systems
  • Implement processes for individuals to opt out of solely automated decisions
  • Ensure human review is available for decisions significantly affecting individuals

7. Implement Security Measures

Treat AI tools as critical parts of your IT infrastructure:

  • Control and monitor access to AI systems
  • Prevent data leaks (don't paste confidential data into public AI tools)
  • Log AI activities for audit trails
  • Apply encryption where appropriate
  • Establish incident response procedures
  • Follow cybersecurity best practices aligned with NIS2 requirements
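The audit-trail point above is the most commonly skipped in practice. A minimal sketch of structured AI activity logging, using Python's standard logging module (the event fields are illustrative):

```python
import json
import logging
import time

audit = logging.getLogger("ai_audit")
# In production this handler would write to append-only, access-controlled
# storage; a stream handler keeps the sketch self-contained.
audit.addHandler(logging.StreamHandler())
audit.setLevel(logging.INFO)

def log_ai_call(user: str, tool: str, action: str) -> dict:
    """Record one AI interaction as a structured, timestamped audit event."""
    event = {"ts": time.time(), "user": user, "tool": tool, "action": action}
    audit.info(json.dumps(event))
    return event
```

Structured (JSON) events rather than free-text log lines make the trail searchable when an auditor or incident responder actually needs it.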

8. Vet Your AI Vendors

For third-party AI services, perform thorough due diligence:

Vendor Assessment Checklist:

  • GDPR-compliant data processing terms
  • Transparency about training data and known risks
  • Commitment to EU AI Act standards
  • Available audit or certification information
  • Security and access control measures
  • Incident notification procedures
  • Contractual liability provisions

9. Train and Guide Employees

Develop an internal AI usage policy and training program:

  • Create clear guidelines on approved vs. prohibited AI uses
  • Provide practical examples and scenarios
  • Establish approval workflows for new AI deployments
  • Train staff on AI literacy (as mandated by the AI Act)
  • Set rules for handling sensitive data with AI
  • Create feedback channels for reporting AI issues

10. Monitor and Adapt

Establish ongoing monitoring systems:

  • Track AI outputs and performance
  • Enable feedback channels for employees and customers
  • Conduct periodic reviews of AI use cases
  • Update controls as laws evolve
  • Stay informed about new standards and codes of conduct
  • Document changes and improvements

Timeline and Priorities

The AI Act's core provisions phase in between 2025 and 2027:

  • February 2025: Prohibited practices ban takes effect
  • August 2025: General-purpose AI obligations begin
  • August 2026: Most high-risk system requirements apply
  • August 2027: Extended deadline for high-risk AI embedded in products covered by existing EU product legislation (e.g. medical devices)

Organizations have a window to build robust yet proportionate AI governance. Early preparation avoids legal pitfalls and enhances trustworthiness.

Why Act Now?

Non-compliance carries significant penalties – up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. Beyond avoiding fines, proactive compliance:

  • Builds customer trust
  • Improves AI system quality
  • Reduces operational risks
  • Positions your organization as a responsible AI user
  • Prepares you for customer and partner due diligence
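The "whichever is higher" rule means the fine cap scales with company size. A quick worked calculation (turnover figures hypothetical):

```python
def max_fine(turnover_eur: int) -> int:
    """Cap for the most serious AI Act violations (Article 99):
    EUR 35 million or 7% of global annual turnover, whichever is higher.
    """
    return max(35_000_000, turnover_eur * 7 // 100)

# A EUR 1bn company faces a cap of EUR 70m;
# a EUR 100m company still faces the EUR 35m floor.
```

Lesser violations carry lower caps under the Act, but even the floor is existential for most mid-sized businesses.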

By implementing these ten actions, organizations stay ahead of compliance deadlines while confidently leveraging AI in day-to-day operations.

Ready to Take the Next Step?

Get the comprehensive EU AI Act compliance guide (118 pages plus templates) with actionable steps, risk frameworks, and ready-to-use templates, or generate a professional AI usage policy tailored to your organization's needs in minutes. Both resources are designed specifically for mid-sized EU companies navigating AI governance.