EU AI Act Compliance Deadlines: What Businesses Need to Know and How to Prepare

The EU AI Act introduces the world's first comprehensive artificial intelligence regulation, setting strict compliance deadlines for businesses using AI. Key milestones are approaching, starting with the ban on certain AI practices in February 2025 and extending to high-risk AI compliance by August 2026, so organizations must act now to ensure transparency, governance, and risk management. This article breaks down the critical deadlines, their impact on businesses, and how companies can prepare to stay compliant while leveraging AI responsibly.

Understanding the EU AI Act and Its Impact on Businesses

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI regulation, setting harmonized rules for the development, deployment, and use of artificial intelligence within the European Union.

The regulation was adopted on June 13, 2024, published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. It establishes a risk-based framework that classifies AI systems into prohibited, high-risk, and limited-risk categories, with separate obligations for general-purpose AI models.

As compliance deadlines approach, businesses using AI must take immediate action to ensure they adhere to transparency, accountability, and data governance requirements. Non-compliance could result in significant fines, making it critical for organizations to prepare now.

The Rise of Generative AI and Why the EU AI Act Matters

Since OpenAI, backed by Microsoft, released ChatGPT in late 2022, investment in generative AI has skyrocketed. AI models capable of generating text, images, audio, and video are now widely used across industries, transforming marketing, content creation, and business automation.

However, concerns have emerged regarding AI model training data. Many AI companies have been accused of using copyrighted materials—such as books, articles, and Hollywood movies—without proper authorization from their creators.

To address these concerns, the EU AI Act requires providers of general-purpose AI (GPAI) models, such as those behind ChatGPT, Midjourney, and DALL·E, to publish sufficiently detailed summaries of the content used to train their models. This ensures greater transparency and protects intellectual property rights within the AI industry.

Key EU AI Act Compliance Deadlines

February 2, 2025 – Prohibited AI Practices Take Effect

From this date, certain AI practices will be banned under Article 5. These include AI systems that:

  • Manipulate human behavior through subliminal techniques, causing significant harm.
  • Exploit vulnerabilities of people based on age, disability, or socio-economic status.
  • Engage in social scoring, where individuals are evaluated based on behavior or personal characteristics.
  • Predict the risk of criminal behavior based solely on profiling or personality traits, without objective, verifiable evidence.
  • Build facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Use emotion recognition AI in workplaces and schools, except for safety or medical purposes.
  • Perform biometric categorization, inferring race, religion, political beliefs, or sexual orientation.

August 2, 2025 – General-Purpose AI (GPAI) Compliance

This deadline applies to general-purpose AI models (GPAI), which include large language models (LLMs) and AI-powered content generation tools.

What businesses must do:

  • GPAI model providers must publish clear documentation of the training data used to develop models such as ChatGPT, Gemini, Claude, and LLaMA.
  • Organizations integrating GPAI into AI applications must disclose AI-generated content and implement transparency measures.

August 2, 2026 – High-Risk AI and Transparency Obligations

This is the primary compliance deadline for most businesses, particularly those using high-risk AI.

The AI Act defines high-risk AI systems under Annex III, covering:

  • AI used in workplace decision-making (e.g., hiring, promotions, performance monitoring).
  • AI in healthcare and financial services, such as diagnostics, loan approvals, and fraud detection.
  • AI in critical infrastructure, such as energy and transport, as well as AI used in law enforcement.
  • AI-powered public services, such as education, border control, and social benefits management.

What businesses must do:

  • High-risk AI systems must comply with strict transparency, governance, and risk management requirements.
  • Users must be informed when interacting with AI, particularly in customer service chatbots, deepfake content, and automated decision-making systems.

August 2, 2027 – AI in Product Safety Regulations (Annex I)

AI integrated into regulated products (e.g., medical devices, self-driving cars, and industrial automation) will be subject to additional compliance requirements under EU product safety laws.

What businesses must do:

  • Manufacturers and AI developers must ensure compliance with high-risk AI standards for regulated products.

How Businesses Can Prepare for EU AI Act Compliance

1. Assess and Classify AI Systems

  • Identify whether your AI system falls under prohibited, high-risk, general-purpose, or low-risk categories.
  • Ensure AI use cases align with EU AI Act risk classification criteria.
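As a rough illustration, this first classification pass can be sketched as a checklist lookup over a use-case description. The keyword lists below are illustrative placeholders, not a legal assessment: a real classification must follow Article 5 and Annex III of the Act, typically with legal review.

```python
# Illustrative first-pass risk screening for AI use cases.
# Keyword lists are placeholders, not the Act's legal definitions.
PROHIBITED = {"social scoring", "subliminal manipulation", "untargeted face scraping"}
HIGH_RISK = {"hiring", "credit scoring", "medical diagnostics", "border control"}

def classify_use_case(description: str) -> str:
    """Return a provisional EU AI Act risk category for a use-case description."""
    text = description.lower()
    if any(keyword in text for keyword in PROHIBITED):
        return "prohibited"
    if any(keyword in text for keyword in HIGH_RISK):
        return "high-risk"
    # Everything else still needs a transparency check (e.g., chatbots).
    return "minimal-risk (review transparency duties)"

print(classify_use_case("AI-assisted hiring and promotion decisions"))
# high-risk
```

A screening script like this is only a triage tool for building an inventory; borderline cases should always be escalated to compliance or legal teams.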

2. Strengthen AI Governance and Documentation

  • Implement AI governance frameworks to track model training, testing, and deployment.
  • Maintain AI model documentation, covering datasets, decision-making processes, and risk mitigation strategies.
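One lightweight way to start this documentation is a structured record per model. The sketch below shows a minimal, hypothetical record; the field names are illustrative and not taken from the regulation's templates, which should be consulted for the actual technical documentation requirements.

```python
# Minimal sketch of a per-model documentation record for AI governance.
# Field names are illustrative, not the Act's prescribed technical file.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    name: str
    version: str
    risk_category: str                      # e.g. prohibited / high-risk / GPAI / minimal-risk
    training_datasets: list[str] = field(default_factory=list)
    risk_mitigations: list[str] = field(default_factory=list)

record = ModelRecord(
    name="resume-screener",
    version="2.1.0",
    risk_category="high-risk",
    training_datasets=["internal-hr-applications-2019-2023"],
    risk_mitigations=["bias audit", "human review of rejections"],
)
print(asdict(record))  # serializable for audit and reporting pipelines
```

Keeping such records in a machine-readable form makes it far easier to generate audit reports later, rather than reconstructing model history on demand.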

3. Ensure Traceability, Explainability, and Transparency

  • High-risk AI → Establish traceability of datasets, decision logic, and AI outputs.
  • General-purpose AI → Implement transparency measures for content generation and user interactions.
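For the transparency side, one simple measure is to label AI-generated content at the point of delivery. The sketch below appends a disclosure notice to generated text; the label wording is illustrative, not a legally prescribed formula.

```python
# Minimal sketch of a transparency measure: disclose AI-generated text
# to the end user. The notice wording here is illustrative only.
def with_ai_disclosure(text: str) -> str:
    """Return the text with an AI-generation disclosure appended."""
    return f"{text}\n\n[This content was generated with the assistance of AI.]"

print(with_ai_disclosure("Your claim has been pre-approved."))
```

In practice such a wrapper would sit at the boundary where generated content reaches users, such as a chatbot response handler or a content publishing step.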

4. Align with Future Regulatory Updates

  • The European Commission will issue further guidance in 2025 and 2026 on:
    1. High-risk AI assessments
    2. Serious incident reporting
    3. Additional compliance obligations

How Dawiso Can Help with AI Compliance and Governance

Dawiso provides metadata management and AI governance solutions to help businesses comply with the EU AI Act.

1. AI Use Case Catalog and Documentation

  • Maintain a centralized repository for AI use cases.
  • Track training data, model versions, and compliance records for regulatory reporting.

2. Risk-Based AI Assessment and Governance

  • Evaluate AI model risks using Dawiso’s metadata management and lineage tracking.
  • Conduct structured AI risk assessments to ensure compliance.

3. AI Lifecycle Management and Audit Readiness

  • Track AI development from training to deployment.
  • Generate compliance reports for audits and regulatory inspections.

4. AI Transparency and Regulatory Compliance

  • Ensure AI-generated content disclosures comply with transparency requirements.
  • Prepare for high-risk AI obligations in 2026 and 2027.


Final Thoughts: Act Now to Stay Compliant

The EU AI Act is one of the most significant AI regulations to date, reshaping AI compliance, data governance, and ethical AI deployment. Businesses that prepare now will benefit from regulatory compliance, enhanced trust, and a competitive advantage in the AI-driven economy.

Dawiso helps companies navigate AI governance and compliance, ensuring transparency, accountability, and risk management.

Start preparing today to align with the EU AI Act.

Petr Mikeška
Dawiso CEO
