The world's first comprehensive AI regulation, the EU AI Act, entered into force on August 1, 2024. This landmark legislation, the product of lengthy negotiations, aims to foster trustworthy AI development while simultaneously spurring innovation. For the public sector, which increasingly relies on AI to streamline services and engage citizens, the new framework presents both opportunities and significant adjustments. As October 21, 2025 approaches, the practical implications for government bodies are becoming clearer.
At its heart, the AI Act takes a risk-based approach, in practice a tiered system. Some AI applications are banned outright because they pose unacceptable risks; others, such as spam filters or AI in video games, fall into the minimal-risk category with few obligations; in between sits a limited-risk tier with mainly transparency duties, such as telling people when they are interacting with a chatbot. The real focus, however, and where much of the public sector's AI use will land, is the 'high-risk' category: systems deemed likely to affect fundamental rights and freedoms, or systems integrated into products already subject to EU safety regulations, such as medical devices.
For these high-risk systems, organizations face a raft of new obligations from August 2, 2026 (AI embedded in already-regulated products, such as medical devices, gets until August 2, 2027). Compliance depends not only on the AI itself but on where you sit in the AI 'value chain.' If your public authority develops its own AI, or commissions a bespoke system and puts it into service under its own name, you are a 'provider': you must meticulously document the system, manage its risks, and ensure it adheres to the rules. Even if you simply use an AI system developed elsewhere, as a 'deployer' you still carry responsibilities. These include implementing robust technical and organizational security measures, ensuring human oversight by staff with the right training and authority, and, crucially, conducting a Fundamental Rights Impact Assessment (FRIA) to identify and mitigate potential harms to citizens. Public bodies must also register their use of high-risk AI systems in an EU database.
This potential dual role of provider and deployer means public sector entities need a clear picture of their AI landscape. The Act's structure, with stringent rules for high-risk AI and lighter-touch, transparency-focused requirements for the rest, offers some breathing room for AI used in less sensitive administrative tasks. Yet the broad spectrum of AI applications in public services means many will fall into the stricter high-risk category, demanding proactive compliance work. The coming months are therefore critical for public sector leaders to take stock of their current AI deployments, understand their obligations under the Act, and prepare for full implementation of the new regulations.
