The EU AI Act introduces the world's first comprehensive artificial intelligence regulation, setting strict compliance deadlines for businesses using AI. With key milestones approaching—starting with the ban on prohibited AI practices in February 2025 and extending to high-risk AI compliance by 2026—organizations must act now to ensure transparency, governance, and risk management. This article breaks down the critical deadlines, their impact on businesses, and how companies can prepare to stay compliant while leveraging AI responsibly.
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI regulation, setting harmonized rules for the development, deployment, and use of artificial intelligence within the European Union.
The regulation was adopted on June 13, 2024, published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. It establishes a risk-based framework that classifies AI systems by risk level: prohibited (unacceptable-risk), high-risk, and limited-risk, with separate obligations for general-purpose AI models.
As compliance deadlines approach, businesses using AI must take immediate action to ensure they adhere to transparency, accountability, and data governance requirements. Non-compliance could result in significant fines, making it critical for organizations to prepare now.
In the 18 months since OpenAI, backed by Microsoft, released ChatGPT, investment in generative AI has skyrocketed. AI models capable of generating text, images, audio, and video are now widely used across industries, transforming marketing, content creation, and business automation.
However, concerns have emerged regarding AI model training data. Many AI companies have been accused of using copyrighted materials—such as books, articles, and Hollywood movies—without proper authorization from their creators.
To address these concerns, the EU AI Act requires providers of general-purpose AI (GPAI) models, such as those behind ChatGPT, Midjourney, and DALL·E, to publish sufficiently detailed summaries of the content used for training. This promotes greater transparency and helps protect intellectual property rights within the AI industry.
From February 2, 2025, certain AI practices are banned under Article 5. These include AI systems that:

- deploy subliminal or manipulative techniques that materially distort behavior and cause harm;
- exploit vulnerabilities related to age, disability, or socio-economic situation;
- perform social scoring that leads to unjustified or disproportionate treatment;
- build facial recognition databases through untargeted scraping of images from the internet or CCTV;
- infer emotions in workplaces or educational institutions (except for medical or safety reasons);
- use biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation;
- carry out real-time remote biometric identification in publicly accessible spaces for law enforcement, outside narrowly defined exceptions.
This deadline (August 2, 2025) applies to general-purpose AI models (GPAI), including large language models (LLMs) and AI-powered content generation tools.
What businesses must do:

- Providers of GPAI models must prepare and publish a sufficiently detailed summary of training content and put in place a policy to comply with EU copyright law.
- Providers must maintain technical documentation and make relevant information available to downstream deployers.
- Businesses using GPAI should verify that their providers meet these obligations and apply transparency measures, such as disclosing when content is AI-generated.
This is the primary compliance deadline for most businesses (August 2, 2026), particularly those deploying high-risk AI systems.
The AI Act defines high-risk AI systems under Annex III, covering:

- biometric identification and categorization;
- management of critical infrastructure (e.g., energy, transport, water);
- education and vocational training (e.g., admissions, exam scoring);
- employment and worker management (e.g., CV screening, promotion decisions);
- access to essential private and public services (e.g., credit scoring, insurance pricing);
- law enforcement;
- migration, asylum, and border control;
- administration of justice and democratic processes.
What businesses must do:

- Implement a risk management system covering the entire AI lifecycle.
- Ensure data governance and the quality of training, validation, and testing data.
- Maintain technical documentation and automatic event logging.
- Provide for human oversight and clear instructions for use.
- Complete a conformity assessment and register the system in the EU database before placing it on the market.
AI integrated into regulated products (e.g., medical devices, self-driving cars, and industrial automation) will be subject to additional compliance requirements under EU product safety laws, with obligations applying from August 2, 2027.
What businesses must do:

- Identify which of their products embed AI components covered by existing EU product safety legislation.
- Integrate AI Act requirements into existing conformity assessment and CE-marking procedures.
- Update technical documentation to reflect the AI-specific requirements.
Dawiso provides metadata management and AI governance solutions to help businesses comply with the EU AI Act.
The EU AI Act is one of the most significant AI regulations to date, reshaping AI compliance, data governance, and ethical AI deployment. Businesses that prepare now will benefit from regulatory compliance, enhanced trust, and a competitive advantage in the AI-driven economy.
Dawiso helps companies navigate AI governance and compliance, ensuring transparency, accountability, and risk management.
Start preparing today to align with the EU AI Act.