If your company uses AI, you’ll want to read this blog post. Artificial intelligence is evolving lightning fast. As a result, we see new regulations designed to keep its use safe and ethical. Meet the AI Act: the EU’s first comprehensive legal framework for regulating AI across the Union. Let us break it down for you.
The AI Act is an EU regulation that establishes a uniform legal framework for the development, marketing, deployment and use of AI systems in the EU. The goal? To make sure AI technologies are safe, respect fundamental rights, and promote trust among users. Its ambition is to create a consistent approach to AI across member states.
The AI Act uses a risk-based framework: the riskier your AI system is, the stricter the rules you need to follow. Your AI technology will fall into one of four categories:
Unacceptable risk: these are AI systems that manipulate decisions, evaluate social behavior or personal traits (the so-called social scoring), predict criminal behavior, scrape facial images from the internet or CCTV footage, or categorize people based on biometric data. These systems are banned outright, with narrow exceptions for law enforcement purposes like locating missing persons or preventing terrorist attacks.
High risk: these AI systems significantly affect users’ safety or fundamental rights, like those used in critical infrastructure, education, employment, public services, law enforcement, and emergency call systems. This category also includes AI used for biometric identification and large-scale content recommendations.
General-purpose AI: these are AI models trained on extensive datasets and capable of performing a broad range of tasks, excluding models used solely for research, development or prototyping before being placed on the market.
Minimal or low risk: if your AI doesn’t fall into any of the other three categories, you don’t have to comply with any category-specific rules. But the AI Act still applies, so make sure you implement a proper AI governance procedure.
To identify your obligations under the AI Act, you first need to find your position in the AI ecosystem. The AI Act differentiates between several roles: most importantly, providers (those who develop an AI system, or have one developed, and place it on the market under their own name) and deployers (those who use an AI system under their authority in a professional context). Importers and distributors have obligations of their own, too.
If you’re a provider of a high-risk AI system, you need to register yourself and your AI system in the EU database before placing it on the market or putting it into service.
For minimal or low-risk AI systems, focus on the obligations that apply to everyone: AI literacy (making sure staff who work with AI are adequately trained) and transparency (for example, telling users when they’re interacting with an AI system such as a chatbot).
For high-risk and general-purpose AI systems, the requirements are stricter. For example, if you’re a provider, you have to implement risk management to identify and minimize risks, create and maintain detailed technical documentation, and keep automatically generated event logs. If you’re a deployer, you have to assign a human to oversee the AI’s performance and make sure the input data is relevant, accurate and kept secure.
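To make the logging obligation more concrete, here’s a minimal sketch of what audit-friendly event logging around an AI decision could look like. The AI Act doesn’t prescribe any format, so every name here (record_event, the field set, the log file) is an illustrative assumption, not a legal requirement.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only: the AI Act requires high-risk systems to keep
# automatically generated logs but does not prescribe a format; all names
# below (record_event, the field set, the log file) are assumptions.
logging.basicConfig(filename="ai_events.log", level=logging.INFO)
logger = logging.getLogger("ai_audit")

def record_event(model_version: str, input_summary: str,
                 output_summary: str, human_reviewer: str = "") -> None:
    """Append one structured audit record per AI decision."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # traceability over time
        "model_version": model_version,    # which system produced the output
        "input_summary": input_summary,    # what went in (avoid raw personal data)
        "output_summary": output_summary,  # what the system decided or recommended
        "human_reviewer": human_reviewer,  # supports the human-oversight duty
    }
    logger.info(json.dumps(event))

# Example: a deployer logging a single CV-screening decision
record_event("cv-screener-2.3", "applicant profile #1042 (anonymized)",
             "ranked 4 of 120", human_reviewer="hr.lead@example.com")
```

Writing one structured record per decision, rather than free-form log lines, makes it far easier to demonstrate traceability when an auditor or regulator comes knocking.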
No matter your position in the AI chain, you should consider adopting a comprehensive AI governance framework. You should specifically focus on categorizing AI projects by risk, implementing appropriate compliance measures, recording actions in an audit-friendly way, training your staff, and tracking new legal developments.
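As a starting point for such a framework, an internal AI inventory can mirror the Act’s risk tiers. The tier names below follow the categories discussed above, but the data model itself (RiskTier, AIProject) is our own illustrative sketch, not anything mandated by the regulation.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch only: the tier names mirror the AI Act categories
# discussed above, but this data model is not mandated by the regulation.
class RiskTier(Enum):
    PROHIBITED = "prohibited"            # e.g. social scoring
    HIGH = "high"                        # e.g. hiring, critical infrastructure
    GENERAL_PURPOSE = "general_purpose"  # broad, multi-task models
    MINIMAL = "minimal"                  # everything else

@dataclass
class AIProject:
    """One entry in the company-wide AI inventory."""
    name: str
    tier: RiskTier
    compliance_actions: list[str] = field(default_factory=list)

    def log_action(self, action: str) -> None:
        # Record every compliance step so the trail stays audit-friendly.
        self.compliance_actions.append(action)

# Example: a hypothetical hiring tool, which lands in the high-risk tier
project = AIProject("resume-ranker", RiskTier.HIGH)
project.log_action("registered provider and system in the EU database")
project.log_action("technical documentation drafted and versioned")
```

Even a lightweight inventory like this gives you the two things regulators care about most: knowing which tier each system falls into, and being able to show what you did about it.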
The AI Act became effective on 1 August 2024. Compliance is phased in over a three-year period, and the main deadline for full compliance is 2 August 2026. The regulation also sets separate deadlines for specific AI systems: for example, the bans on prohibited AI practices and the AI literacy obligations have applied since 2 February 2025, and the rules for general-purpose AI models since 2 August 2025, so make sure to check the timeline in detail. As always, it’s a good idea to start prepping early.
The AI Act sets standards for safe and ethical AI use across the EU. It applies to everyone involved with AI and imposes stricter rules on higher-risk systems. All companies must comply with the AI literacy and transparency obligations. The Act became effective on 1 August 2024 and the main deadline for compliance is 2 August 2026, so it’s high time to start preparing. Need a hand? We’re here to help: reach out for a consultation.