
Understanding the €35M Penalty: What Triggers High Fines Under the EU AI Act

The EU AI Act introduces penalties up to €35 million or 7% of turnover. Learn what violations carry the highest fines and how to avoid them.

The EU AI Act introduces one of the most stringent penalty frameworks for technology regulation. Understanding what triggers these fines – and how to avoid them – is essential for any organization using AI.

The Penalty Structure

The AI Act establishes a tiered penalty system based on violation severity:

Tier 1: Up to €35 million or 7% of global annual turnover, whichever is higher

Applies to the most serious violations:

  • Deploying prohibited AI systems
  • Using AI for banned practices (social scoring, manipulation, unlawful surveillance)
  • Non-compliance with prohibited practice provisions

Tier 2: Up to €15 million or 3% of global annual turnover, whichever is higher

Applies to violations of operator obligations, including those for high-risk systems and Article 50 transparency duties:

  • Failing to meet requirements for high-risk systems
  • Non-compliance with data governance standards
  • Inadequate human oversight
  • Insufficient accuracy and robustness measures
  • Lack of required documentation
  • Breaching transparency obligations (e.g., failing to disclose chatbots or label AI-generated content)

Tier 3: Up to €7.5 million or 1% of global annual turnover, whichever is higher

Applies to:

  • Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities
  • Failing to provide information or documentation requested by regulators

Note: For SMEs, the penalty is the lower of the stated amount or the specified percentage of turnover.
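To make the arithmetic concrete, here is a minimal Python sketch of how the caps for the two highest tiers combine with the SME rule. This is an illustration only: it computes the maximum possible fine, not what a regulator would actually impose in a given case.

```python
# Illustrative sketch of the EU AI Act's penalty caps (Article 99).
# Fines are "up to" these amounts; the actual fine depends on the
# circumstances, including aggravating and mitigating factors.

TIERS = {
    1: (35_000_000, 7),  # prohibited practices: EUR 35M or 7% of turnover
    2: (15_000_000, 3),  # operator obligations:  EUR 15M or 3% of turnover
}

def penalty_cap(tier: int, turnover_eur: int, is_sme: bool = False) -> float:
    """Maximum possible fine for a tier, given global annual turnover.

    For most undertakings the cap is whichever is HIGHER of the fixed
    amount and the turnover percentage; for SMEs and start-ups it is
    whichever is LOWER.
    """
    fixed, pct = TIERS[tier]
    share = turnover_eur * pct / 100
    return min(fixed, share) if is_sme else max(fixed, share)

# Large company, EUR 1bn turnover, prohibited practice:
print(penalty_cap(1, 1_000_000_000))            # 70000000.0 (7% exceeds EUR 35M)
# SME, EUR 20m turnover, same violation:
print(penalty_cap(1, 20_000_000, is_sme=True))  # 1400000.0 (lower of the two)
```

Note how the percentage prong only bites for large companies: below €500M turnover, 7% is less than €35M, so the fixed amount is the Tier 1 cap.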

What Triggers the Highest Fines?

Prohibited AI Practices (€35M Tier)

The most severe penalties target AI systems that the EU has deemed unacceptable:

1. Social Scoring Systems

AI that evaluates or classifies individuals based on social behavior, personal characteristics, or predicted behavior, leading to detrimental treatment that is unjustified or disproportionate. In the final Act, the prohibition covers both public authorities and private actors.

Example Violations:

  • A government agency deploying AI to score citizens based on social media activity
  • Private companies creating "citizen scores" that affect access to services
  • Systems that track and evaluate personal behavior for punitive purposes

2. Manipulative AI

Systems that deploy subliminal techniques or exploit vulnerabilities to materially distort behavior in ways that cause significant harm.

Example Violations:

  • AI-powered apps that manipulate children's behavior to make purchases
  • Systems exploiting elderly users' cognitive vulnerabilities
  • Subliminal messaging through AI-driven interfaces

3. Biometric Surveillance

Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow, specifically authorized exceptions).

Example Violations:

  • Deploying facial recognition in shopping centers without authorization
  • Continuous biometric monitoring of employees in public-facing areas
  • Mass surveillance systems in public transportation

4. Biometric Categorization

Using biometric data to deduce sensitive attributes (race, political opinions, sexual orientation) except for specific authorized uses.

High-Risk System Violations (€15M Tier)

The second penalty tier targets failures in managing high-risk AI systems:

Risk Management Failures

  • No documented risk assessment process
  • Failure to implement risk mitigation measures
  • Inadequate testing before deployment
  • Lack of post-market monitoring

Data Governance Deficiencies

  • Using biased or incomplete training data
  • Insufficient data quality controls
  • Inadequate documentation of data sources
  • Failure to address known data limitations

Human Oversight Gaps

  • Deploying high-risk AI without human supervision capability
  • Inadequate training for human overseers
  • No mechanism for humans to intervene or override decisions
  • Insufficient information provided to human overseers

Documentation and Transparency Failures

  • Missing or incomplete technical documentation
  • Failure to maintain logs of AI system operation
  • Inadequate instructions for deployers
  • No conformity assessment documentation

Real-World Scenarios: When Penalties Apply

Scenario 1: Hiring AI Without Safeguards

A mid-sized company deploys AI to screen job applications without:

  • Conducting a conformity assessment
  • Implementing human oversight
  • Testing for bias
  • Documenting the system's decision logic

Potential Penalty: Up to €15 million (high-risk system violation)

Scenario 2: Undisclosed Chatbot

An e-commerce site uses an AI chatbot for customer service without informing users they're interacting with AI.

Potential Penalty: Up to €15 million (Article 50 transparency violation)

Scenario 3: Unauthorized Emotion Recognition

A retail chain deploys emotion recognition cameras to analyze customer reactions without:

  • Informing customers
  • Conducting impact assessments
  • Obtaining necessary permissions

Potential Penalty: Up to €15 million, whether treated as a high-risk system failure or an Article 50 transparency breach. Note that emotion recognition in workplaces and educational institutions is prohibited outright and falls in the €35 million tier.

Scenario 4: Misleading Authority Claims

A company claims its AI system is "EU AI Act compliant" without proper conformity assessment, then provides false documentation to regulators.

Potential Penalty: Up to €7.5 million for information violations, plus potential €15 million for underlying compliance failures

Risk Factors That Increase Penalty Likelihood

Aggravating Factors:

  • Intentional non-compliance
  • Previous violations
  • Large scale of harm or affected individuals
  • Failure to cooperate with authorities
  • Attempts to conceal violations

Mitigating Factors:

  • Proactive compliance efforts
  • Self-reporting of issues
  • Cooperation with investigations
  • Quick remediation of problems
  • Implementation of compliance programs

How to Avoid Penalties: Practical Steps

1. Know Your Risk Category

Accurately classify each AI system. High-risk systems require the most attention.

2. Document Everything

Maintain comprehensive records of:

  • Risk assessments
  • Data sources and quality checks
  • Testing and validation results
  • Human oversight procedures
  • User communications

3. Implement Technical Safeguards

  • Robust testing before deployment
  • Ongoing monitoring after launch
  • Audit logs and traceability
  • Security measures to prevent misuse

4. Be Transparent

  • Clearly disclose AI use to affected individuals
  • Provide meaningful information about AI decision-making
  • Label AI-generated content
  • Inform users of their rights

5. Establish Governance

  • Designate responsible individuals for AI compliance
  • Create approval workflows for new AI deployments
  • Conduct regular compliance reviews
  • Train staff on obligations

6. Work with Compliant Vendors

  • Verify vendor compliance claims
  • Ensure contractual protection
  • Request documentation and certifications
  • Establish clear responsibility allocation

Timeline Considerations

Penalties apply according to the AI Act's phased implementation:

  • February 2025: Prohibitions on banned AI practices apply (the penalty provisions themselves become enforceable from August 2025)
  • August 2025: Penalty provisions and obligations for general-purpose AI (GPAI) model providers applicable
  • August 2026: High-risk system penalties fully applicable
  • August 2027: All provisions in force, including for high-risk AI embedded in regulated products and pre-existing systems

Organizations should prioritize compliance based on these dates, focusing first on avoiding prohibited practices, then preparing high-risk system controls.

The Bottom Line

The EU AI Act's penalties are designed to be dissuasive – large enough to encourage meaningful compliance even from major corporations. For SMEs, the percentage-based alternative provides some relief but still represents significant financial risk.

The best protection is proactive compliance: understanding your obligations, implementing appropriate controls, and maintaining documentation. Organizations that treat AI governance as an ongoing priority rather than a one-time checklist will be best positioned to avoid penalties while building trustworthy AI systems.

Remember: penalties aren't just about avoiding fines. They reflect real harms that poor AI practices can cause. Compliance protects not just your organization, but the people affected by your AI systems.

Ready to Take the Next Step?

Get the comprehensive guide or generate a customized AI policy for your organization.

Download Free Guide

118 pages + templates

Get the comprehensive EU AI Act compliance guide with actionable steps, risk frameworks, and ready-to-use templates.

Generate AI Policy

Customized for you

Create a professional, customized AI usage policy tailored to your organization's needs in minutes.

Both resources are designed specifically for mid-sized EU companies navigating AI governance.