It feels like just yesterday we were all talking about the potential of AI, and now, here we are, deep in the practicalities of its regulation. For those of us keeping a close eye on how artificial intelligence is being shaped, especially in Europe, the EU AI Act has been a constant topic. And as we head into October 2025, there's a significant update worth unpacking: how the Act's prohibitions on certain AI practices are being put into practice.
Back on February 4th, 2025, the European Commission released its Guidelines on Prohibited AI Practices. This wasn't just a minor addendum; it was a crucial step to ensure that the bans laid out in the AI Act are understood and applied consistently across the board. Think of it as the rulebook getting a detailed user manual: the guidelines are non-binding (authoritative interpretation of the Act rests with the Court of Justice of the EU), but they are designed to help regulators, AI system developers, and those who deploy these systems navigate the complexities.
The AI Act itself, which entered into force on August 1st, 2024, is built around a risk-based approach. It categorizes AI systems into four tiers, with the most concerning ones, those posing an unacceptable risk to fundamental rights and EU values, explicitly forbidden under Article 5. These prohibitions officially kicked in on February 2nd, 2025, six months after the Act's entry into force. This is precisely why the guidelines are so timely and important.
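To make the tiered structure concrete, here's a minimal sketch of how a compliance team might model the four categories internally. The tier names track the Act's structure, but the Python enum and the example mapping are my own illustration; the Act defines these categories in legal text, not code, and classifying a real system is a legal judgment, not a lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited outright under Article 5"
    HIGH = "allowed, subject to strict obligations such as conformity assessment"
    LIMITED = "allowed, subject to transparency duties such as chatbot disclosure"
    MINIMAL = "allowed with no additional obligations"

# Illustrative mapping only: real classification depends on the specific
# system and its context of use.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring decisions": RiskTier.HIGH,
    "general-purpose customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```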
So, what exactly are these AI practices that are now off-limits? The list is quite specific and aims to prevent AI from being used in ways that could be deeply harmful or manipulative.
A Look at the Prohibited Practices
At its core, Article 5 of the AI Act targets systems whose use the EU regards as an unacceptable risk, whether because they manipulate behavior, entrench discrimination, or enable mass surveillance. The prohibited practices include (a first-pass screening sketch follows the list):
- Subliminal, Manipulative, or Deceptive Techniques: AI systems that deploy techniques beyond a person's awareness, or that are purposefully manipulative or deceptive, to materially distort behavior in a way that causes or is likely to cause significant harm. Imagine AI nudging you towards decisions you wouldn't otherwise make, or presenting information in a misleading fashion.
- Exploiting Vulnerabilities: This specifically calls out AI that preys on the vulnerabilities of certain groups, whether due to age, disability, or a specific social or economic situation, think children, people with disabilities, or those in financial distress. The goal is to prevent AI from exacerbating existing inequalities or causing harm.
- Social Scoring: AI systems that assign scores based on social behavior or personal characteristics, leading to detrimental or unfair treatment. This is a big one, aiming to prevent a future where your social standing is dictated by an algorithm in ways that aren't proportionate or fair.
- Predicting Criminal Risk Based on Personality: AI that assesses or predicts an individual's risk of committing a crime solely based on profiling or personality traits. The emphasis here is on 'solely': AI can still support a human assessment, provided that assessment is grounded in objective, verifiable facts directly linked to criminal activity.
- Untargeted Facial Image Collection: The creation or expansion of facial recognition databases by indiscriminately scraping facial images from the internet or CCTV footage. This is a direct response to privacy concerns and the potential for mass surveillance.
- Emotion Recognition in Workplaces and Education: AI systems that infer emotions in these settings, except for specific medical or safety purposes. The idea is to protect individuals from being constantly monitored for their emotional state, which could lead to unfair judgments or a chilling effect on expression.
- Biometric Categorization of Sensitive Traits: Using biometric data to classify individuals based on sensitive characteristics like race, political opinions, or sexual orientation. This aims to prevent AI from making discriminatory classifications based on inherent personal traits.
- Real-time Remote Biometric Identification in Public Spaces: This is a significant prohibition, barring the use of live facial recognition by law enforcement in public spaces, with very limited exceptions. The aim is to strike a balance between security needs and fundamental rights to privacy and freedom of assembly.
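Pulling the list together, here's a minimal, purely illustrative sketch of how a deployer might structure a first-pass screening checklist against these categories. The flag names and the `screen_use_case` helper are hypothetical inventions for this post; a real Article 5 assessment hinges on context, exceptions, and harm thresholds that a boolean checklist can't capture, so a raised flag should be read as "escalate to legal review", never as a compliance verdict.

```python
from dataclasses import dataclass

# Hypothetical triage checklist mirroring the Article 5 categories above.
# A "True" flag means "escalate to legal review", not "prohibited": the
# actual prohibitions turn on context, exceptions, and thresholds of harm.
@dataclass
class UseCaseFlags:
    manipulative_or_subliminal: bool = False          # Art. 5(1)(a)
    exploits_vulnerable_groups: bool = False          # Art. 5(1)(b)
    social_scoring: bool = False                      # Art. 5(1)(c)
    personality_based_crime_prediction: bool = False  # Art. 5(1)(d)
    untargeted_face_scraping: bool = False            # Art. 5(1)(e)
    emotion_recognition_work_or_school: bool = False  # Art. 5(1)(f)
    sensitive_biometric_categorization: bool = False  # Art. 5(1)(g)
    realtime_public_biometric_id: bool = False        # Art. 5(1)(h)

def screen_use_case(name: str, flags: UseCaseFlags) -> list[str]:
    """Return the names of any Article 5 categories the use case may touch."""
    hits = [category for category, raised in vars(flags).items() if raised]
    if hits:
        print(f"{name}: potential Article 5 issues {hits} -> escalate to legal review")
    else:
        print(f"{name}: nothing flagged at the triage stage")
    return hits

# Example: an HR tool that infers candidates' emotions from video interviews
# would trip the workplace emotion-recognition flag.
screen_use_case(
    "video-interview sentiment analyzer",
    UseCaseFlags(emotion_recognition_work_or_school=True),
)
```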
Why Now? The Push for Clarity
The EU's approach is clearly about fostering trust in AI. By defining what's not acceptable, the EU is setting a baseline for responsible innovation. The guidelines are crucial because they clarify the nuances of these prohibitions, offering examples and interpretations to ensure that the law is applied uniformly. It's not just about saying 'no' to certain AI practices; it's about ensuring that the 'yes' to AI development is built on a foundation of ethical principles and respect for human rights.
It's a complex landscape, and as these regulations continue to evolve, staying informed is key. The EU AI Act, together with the Commission's guidelines on prohibited practices, is a significant step towards a future where AI serves humanity responsibly. As we move through October 2025 and beyond, we'll undoubtedly see how these rules play out in practice, shaping the AI we interact with every day.
