It’s easy to feel a bit overwhelmed when new regulations land, especially something as sweeping as the EU AI Act. But as we look towards November 2025, it’s crucial for those in the public sector to get a handle on what this landmark legislation means for how they use artificial intelligence.
The AI Act, which officially came into force on August 1, 2024, isn't just about restricting AI; it's a carefully crafted framework aiming to foster trustworthy AI development while encouraging innovation. For public sector bodies, which increasingly rely on AI to streamline services and offer more tailored support to citizens, the new law presents both challenges and opportunities.
At its heart, the AI Act operates on a risk-based approach. Think of it like a tiered system: some AI practices are banned outright because they pose an unacceptable risk to fundamental rights (these prohibitions have applied since February 2, 2025). Others, like simple spam filters or video games, are considered minimal risk and face very few obligations. The real focus, however, is on high-risk AI systems. These are systems deemed risky either because they are safety components of products already regulated under EU law (like medical devices) or because they fall into the sensitive areas listed in Annex III of the Act, such as access to essential public services, where they can affect citizens' fundamental rights and freedoms. And, of course, there's a special category for general-purpose AI models, including generative AI, acknowledging their rapid rise and unique characteristics.
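To make the tiering concrete, here's a toy Python sketch of the risk categories. Everything in it is illustrative: the example use cases are drawn from the kinds of practices and Annex III areas described above, and the keyword matching stands in for what is, in reality, a careful legal analysis, not something you could automate this way.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "heavy obligations before and during use"
    LIMITED = "transparency duties (e.g. disclose you're talking to AI)"
    MINIMAL = "few or no obligations (e.g. spam filters, video games)"


def classify(use_case: str) -> RiskTier:
    """Toy classifier. Real classification requires legal analysis of
    Annex I and Annex III of the Act, not keyword matching."""
    prohibited = {"social scoring", "emotion recognition at work"}
    high_risk_areas = {"benefits eligibility", "border control", "exam scoring"}
    if use_case in prohibited:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk_areas:
        return RiskTier.HIGH
    # The long tail of everyday tools defaults to minimal risk here;
    # limited-risk transparency duties would need their own check.
    return RiskTier.MINIMAL


print(classify("benefits eligibility"))  # RiskTier.HIGH
```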
For many public sector AI use cases, particularly those that could significantly affect citizens' lives, the Act introduces a host of new obligations that take effect on August 2, 2026. This means the period leading up to November 2025 is prime time for preparation.
What kind of obligations are we talking about? Well, it depends on your role in the AI value chain. If your public authority develops its own AI systems, or commissions bespoke ones and puts them into service under its own name, you're considered a 'provider.' Providers of high-risk systems carry the heaviest obligations: risk management, data governance, technical documentation, and conformity assessment, alongside transparency duties for limited-risk systems (such as telling people they're interacting with a chatbot) and specific rules for general-purpose AI. And if you're actually using an AI system under your own authority – whether you built it or not – you're a 'deployer.'
For high-risk AI systems, deployers face even more stringent requirements. This includes implementing robust technical and organizational security measures, ensuring mandatory human oversight with competent and authorized personnel, and crucially, conducting a Fundamental Rights Impact Assessment (FRIA). This assessment is designed to proactively identify and mitigate potential risks to individuals. Furthermore, public sector deployers will need to register their use of high-risk AI systems in an EU database.
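One practical way to think about these deployer duties is as a running checklist per system. Here's a minimal Python sketch along those lines, assuming a deployer simply wants an internal record of the four obligations above; the class, field names, and the "ready to deploy" logic are hypothetical conventions for illustration, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field


@dataclass
class HighRiskDeployment:
    """Hypothetical compliance record for one high-risk AI system.
    Field names are illustrative, not taken from the Act's text."""
    system_name: str
    security_measures_in_place: bool = False   # technical & organizational measures
    human_oversight_assigned: bool = False     # competent, authorized staff named
    fria_completed: bool = False               # Fundamental Rights Impact Assessment
    registered_in_eu_database: bool = False    # public sector registration duty
    open_items: list[str] = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        """True only once every obligation on this checklist is met."""
        return all([
            self.security_measures_in_place,
            self.human_oversight_assigned,
            self.fria_completed,
            self.registered_in_eu_database,
        ])


record = HighRiskDeployment("benefits-triage-model")
record.fria_completed = True
print(record.ready_to_deploy())  # False until every obligation is ticked
```

The point of the sketch isn't the code itself but the shape of the work: each obligation is a distinct, verifiable step, and nothing goes live until all of them are ticked.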
It’s a lot to digest, but the underlying principle is clear: the AI Act seeks to ensure that AI used within the public sector is not only effective but also fair, transparent, and respectful of fundamental rights. The coming months are critical for understanding these nuances, assessing current AI deployments, and planning for compliance. By November 2025, public sector organizations should have a solid grasp of their obligations and a clear roadmap for navigating the AI Act's landscape.
