The EU AI Act’s most urgent provisions — the outright ban on certain categories of artificial intelligence — came into force on 2 February 2025. For organisations operating across Europe, the clock has been running ever since. With enforcement now active, HR and compliance leaders who have not yet audited their AI tool stack face real regulatory exposure.
This article explains which AI systems are now banned, why they matter to your organisation, and what concrete steps you should take today.
What Happened
The EU AI Act (Regulation (EU) 2024/1689), which entered into force in August 2024, introduced the world’s first comprehensive AI regulatory framework. It applies a tiered risk classification (unacceptable, high, limited, and minimal risk), with outright prohibition reserved for the most dangerous applications.
The prohibited systems provisions are the first to take legal effect, with no transition period. They apply to providers (those who develop AI systems or place them on the market), deployers (organisations that use AI tools in the course of business), and importers and distributors of AI systems within the EU. Critically, this means UK organisations with EU operations, EU employees, or EU-facing services may be caught.
What Is Now Prohibited
Eight categories of AI system are now prohibited under Article 5 of the EU AI Act. The ones most likely to affect HR and corporate compliance teams are:
1. Emotion recognition in the workplace and educational institutions
Any AI system that infers employees’ emotional states, whether through facial analysis, voice tone detection, or physiological data, is now banned in professional settings (the Act carves out a narrow exception for systems deployed for medical or safety reasons). This includes AI-driven recruitment tools that assess candidate emotion during video interviews, and productivity monitoring software that analyses employee sentiment.
2. Biometric categorisation using sensitive characteristics
AI systems that categorise individuals based on biometric data to infer or deduce protected characteristics — including political opinions, religious beliefs, sexual orientation, or race — are prohibited.
3. Social scoring
AI systems that evaluate or classify individuals based on social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment in contexts unrelated to the data originally collected, are banned. The final text of the Act is not limited to public authorities: commercial operators should also review any scoring systems used in access or eligibility decisions.
4. Subliminal manipulation
AI that deploys subliminal techniques beyond a person’s conscious awareness, materially distorting their behaviour in a way that causes or is reasonably likely to cause significant harm, is prohibited. Marketing teams using AI-driven personalisation at scale should review whether their tooling crosses this threshold.
5. Exploitation of vulnerabilities
AI that exploits the vulnerabilities of specific groups (age, disability, or a specific social or economic situation) to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm is banned.
6. Criminal risk assessment based solely on profiling
AI systems that assess the likelihood of an individual committing a criminal offence based solely on profiling of personal characteristics — without relying on factual evidence directly associated with criminal conduct — are prohibited. This targets predictive-policing tools and any profiling-only risk scoring used in security, access, or screening contexts.
7. Real-time remote biometric identification in public spaces (by law enforcement)
This prohibition is largely aimed at state actors and admits only narrow, judicially authorised law-enforcement exceptions, but it is notable for organisations providing AI infrastructure to public bodies.
8. Untargeted scraping of facial images for facial recognition databases
Building or expanding facial recognition databases through untargeted scraping of images from the internet or CCTV footage is prohibited, which is directly relevant to any organisation using AI-powered security systems.
Fines for breaches of the prohibited systems provisions can reach €35 million or 7% of global annual turnover, whichever is higher. The formal penalty regime for Article 5 breaches became enforceable from 2 August 2025.
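To make the “whichever is higher” mechanic concrete, here is a minimal sketch (illustrative arithmetic only, not a legal calculation; the function name is ours):

```python
def article5_fine_ceiling(global_turnover_eur: float) -> float:
    """Upper bound of a fine for an Article 5 breach: the higher of
    EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# 7% of EUR 1 billion is EUR 70 million, so the turnover-based limb
# governs once global turnover exceeds EUR 500 million (35m / 0.07).
print(article5_fine_ceiling(1_000_000_000))  # 70000000.0
```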
Industry Reaction
The European Data Protection Board (EDPB) has recommended that national data protection authorities be designated among the market surveillance authorities for AI Act enforcement, which would align the Act directly with GDPR enforcement infrastructure. The EDPB’s Statement 3/2024 on data protection authorities’ role in the AI Act framework also makes clear that many prohibited AI practices involve personal data processing that can breach the GDPR, creating dual-track exposure for non-compliant organisations.
The ICO has issued its own commentary on the AI Act, noting that while the Act does not apply directly to UK-only operations (post-Brexit), UK organisations with EU presence must comply. The ICO has also signalled it will track EU AI Act enforcement closely as it develops its own AI guidance under the UK GDPR.
What Managers Should Watch
HR Teams
- Audit your recruitment tech stack immediately. Video interview platforms, personality assessment tools, and CV screening AI are the highest-risk category. Confirm with vendors whether any emotion detection, biometric analysis, or inferred characteristic scoring is embedded in the product (a triage sketch follows this list).
- Review employee monitoring software. Productivity analytics tools that go beyond activity logging into sentiment or emotion inference are now prohibited for EU-based employees.
- Check your workplace AI policy. If your organisation has deployed or is piloting AI tools for performance management, absence prediction, or wellbeing monitoring, legal review is essential before continuing use.
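Where vendor due diligence produces structured answers, even a simple script can surface tools needing escalation. The sketch below is hypothetical: the capability labels and questionnaire fields are ours, not the Act’s, and a flag is a prompt for legal review, not a legal conclusion.

```python
# Hypothetical capability flags a vendor questionnaire might collect.
PROHIBITED_CAPABILITIES = {
    "emotion_recognition",       # workplace emotion inference (Article 5)
    "biometric_categorisation",  # inferring protected characteristics
    "social_scoring",
}

def flag_tool(name: str, declared_capabilities: set[str]) -> list[str]:
    """Return any declared capabilities that fall in a prohibited category."""
    return sorted(declared_capabilities & PROHIBITED_CAPABILITIES)

# Example: a video interview platform that also scores candidate emotion.
hits = flag_tool("video_interview_ai", {"transcription", "emotion_recognition"})
if hits:
    print(f"Escalate to legal review: {hits}")
```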
Senior Leadership
- This is not just an EU issue if you have EU operations. Subsidiaries, EU-facing digital products, and EU employee data all bring the Act into scope. A group-level review of AI deployment is overdue.
- Document your AI inventory now. Regulators expect organisations to maintain records of the AI systems they use, and a lack of documentation compounds non-compliance risk (an illustrative register entry follows this list).
- Engage legal counsel on grey-area tools. Several widely used commercial products sit in the zone between what is clearly prohibited and what is clearly permitted. With the prohibitions already in force, obtain a legal opinion before continuing use, not after an enforcement inquiry.
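As a sketch of what one register entry might record (the schema is illustrative; the Act does not prescribe specific fields for deployers’ internal inventories):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an internal AI system register (illustrative fields)."""
    name: str
    vendor: str
    business_function: str           # e.g. "recruitment", "marketing"
    processes_eu_data: bool
    capabilities: list[str] = field(default_factory=list)
    risk_tier: str = "unclassified"  # unacceptable / high / limited / minimal
    last_reviewed: date = field(default_factory=date.today)

register = [
    AISystemRecord(
        name="CV screening assistant",
        vendor="ExampleVendor Ltd",
        business_function="recruitment",
        processes_eu_data=True,
        capabilities=["cv_ranking"],
        risk_tier="high",  # employment tools sit in Annex III
    ),
]
```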
Marketing
- Review AI personalisation tools for subliminal manipulation risk. Behavioural targeting AI that uses psychological profiling techniques to exploit emotional states or cognitive biases should be assessed against Article 5 criteria.
- Facial recognition in retail or event contexts needs urgent review. Footfall analytics and in-store AI systems that rely on biometric identification or categorisation sit closest to the prohibited categories and should be assessed first.
Related Compliance Considerations
The EU AI Act does not exist in isolation. High-risk AI systems (the tier below prohibited) will face additional obligations from August 2026, including conformity assessments, transparency requirements, and human oversight mandates. Recruitment AI and systems used for employment and worker-management decisions are explicitly listed as high-risk under Annex III of the Act.
Organisations that have not yet begun AI Act compliance work should prioritise a risk classification exercise covering all AI tools in use. The prohibited systems rules are the floor — not the ceiling — of what is coming.
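A first pass over that inventory can be as simple as the triage function below. This is a deliberate oversimplification of the Act’s tests (the capability labels are the hypothetical ones from the vendor sketch above), and any “unacceptable” or “high” result calls for legal confirmation rather than being treated as a final classification:

```python
def provisional_tier(capabilities: set[str], used_in_employment: bool) -> str:
    """First-pass EU AI Act tier for internal triage only (simplified)."""
    prohibited = {"emotion_recognition", "biometric_categorisation", "social_scoring"}
    if capabilities & prohibited:
        return "unacceptable"   # Article 5 territory: stop use, escalate
    if used_in_employment:
        return "high"           # Annex III covers employment contexts
    return "needs_assessment"   # limited/minimal tiers need closer review

print(provisional_tier({"emotion_recognition"}, used_in_employment=True))
# -> unacceptable
```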
The EU AI Act applies to organisations operating in the EU or providing AI systems to EU users. UK-only organisations are not directly bound by the Act but should monitor ICO guidance. This article does not constitute legal advice.
External sources:
– EU AI Act — Official Text (EUR-Lex)
– EDPB Statement 3/2024 on DPAs and the AI Act
– ICO Commentary on the EU AI Act
