It feels like just yesterday we were hearing whispers about the EU's Artificial Intelligence Act, and now, here we are, watching its phased implementation unfold. For those of us keeping an eye on how AI regulation is shaping up globally, this is a significant development. The Act cleared its final procedural hurdle when it was published in the Official Journal of the EU on July 12, 2024, and it entered into force twenty days later, on August 1, 2024.
But here's the crucial part: it's not a 'switch on, everything's in effect' kind of deal. The EU AI Act is rolling out in stages, and understanding these phases is key. The first set of rules kicked in on February 2, 2025: the prohibitions on certain AI practices. Think AI systems that exploit people's vulnerabilities, or the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases – those are now off-limits.
Looking ahead, the requirements for general-purpose AI (GPAI) models apply one year after the Act entered into force, which brings us to August 2, 2025. Providers of these foundation models will have specific obligations to meet by then. It's a deliberate approach, aiming to balance innovation with fundamental rights and safety – the core principle behind this risk-based regulation.
The Act's reach is quite extensive, with what's often called 'long-arm jurisdiction.' It applies not only to AI systems developed or used within the EU but also to those placed on the EU market, regardless of where the provider is located. If you're deploying an AI system in the EU, or if its output is used there, you're likely within scope. The same goes for importers and distributors who bring AI systems into the EU's commercial supply chain.
The risk-based framework is the heart of the Act. It sorts AI systems into tiers of risk, each with corresponding compliance obligations. At the top are 'unacceptable risk' practices, which are outright prohibited – things like using subliminal techniques to materially distort behavior, social scoring that leads to unjustified disadvantage, and real-time remote biometric identification in publicly accessible spaces (with very narrow law-enforcement exceptions, such as preventing terrorist attacks).
Then there are 'high-risk' AI systems, which carry stringent compliance duties. They fall into two main categories: systems that are safety components of products already regulated by EU safety legislation (like medical devices or cars), and standalone systems operating in eight critical areas: biometrics, critical infrastructure, education and vocational training, employment, access to essential public and private services (such as credit scoring), law enforcement, migration and border control, and the administration of justice and democratic processes.
For systems deemed 'limited risk,' the focus is on transparency. If you're interacting with a chatbot, for instance, you should be informed that you're talking to an AI. Likewise, emotion recognition systems must be disclosed to the people exposed to them, and deepfakes and other synthetic content must be clearly marked as artificially generated, in a machine-readable format.
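What might 'machine-readable' look like in practice? The Act doesn't prescribe a specific format, and standards work in this space (for example, C2PA-style content credentials) is still evolving, so the sketch below is just a minimal illustration: a hypothetical JSON disclosure record a provider might ship alongside a generated asset. Every field name here is an assumption for illustration, not anything mandated by the regulation.

```python
# Minimal sketch of a machine-readable "AI-generated" disclosure label.
# The schema is hypothetical: the AI Act requires machine-readable marking
# of synthetic content but does not prescribe this (or any) exact format.
import json
from datetime import datetime, timezone

def label_synthetic_content(content_id: str, generator: str) -> str:
    """Build a JSON disclosure record for an AI-generated asset."""
    record = {
        "content_id": content_id,   # identifier of the asset being labeled
        "ai_generated": True,       # the machine-readable disclosure itself
        "generator": generator,     # the system that produced the content
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Example: a sidecar label that could accompany a generated image.
print(label_synthetic_content("img-0042", "example-diffusion-v1"))
```

In practice, a label like this would more likely be embedded in the asset's own metadata or a cryptographically signed provenance manifest rather than a loose sidecar file, but the core idea – a flag software can check without a human reading it – is the same.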
It's a complex but necessary piece of legislation, aiming to provide clarity and a framework for responsible AI development and deployment. As these phases continue to roll out, staying informed will be key for anyone involved in the AI ecosystem.
