EDPB Issues New Guidelines on AI Systems and Personal Data — Key Changes for EU Organisations

Claude Tester
Apr 26, 2026

Overview of the New Guidelines

In December 2024, the European Data Protection Board (EDPB) adopted Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models. This opinion provides the most comprehensive EDPB statement yet on how the EU GDPR applies to the development, deployment, and operation of artificial intelligence systems that involve personal data.

The guidelines are directly relevant to any organisation operating in the EU or processing EU residents’ personal data as part of AI-driven workflows — from HR analytics tools and customer-facing chatbots to automated decision-making systems used in recruitment, credit scoring, or marketing personalisation.

For the official publication, see: EDPB Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models


Key Changes from Previous Guidance

Until this opinion, organisations developing or deploying AI systems had to piece together their GDPR obligations from existing guidelines on automated decision-making (Article 22), lawful basis, purpose limitation, and data minimisation. EDPB Opinion 28/2024 consolidates and significantly extends this picture.

The most significant clarifications and changes are:

1. Legitimate interest for AI training data
The EDPB clarified when legitimate interest (Article 6(1)(f)) may, and may not, serve as a lawful basis for processing publicly available personal data to train AI models. The guidance applies a three-step test: the interest pursued must be legitimate, the processing must be necessary to achieve it, and the controller must document that the data subject's fundamental rights and freedoms do not override that interest. Reliance on legitimate interest is not a default; it requires a case-by-case assessment.

2. Purpose limitation and AI model outputs
The EDPB addressed the risk of “function creep” in AI systems, where models trained for one purpose are subsequently used in ways incompatible with the original purpose. Organisations must assess compatibility before deploying AI outputs in new use cases — even internal ones.

3. Anonymisation of AI models
The EDPB provided guidance on when an AI model (or its outputs) can be considered to have been “anonymised” for GDPR purposes. The threshold is high: a model that can reconstruct or infer personal data about specific individuals is not anonymised, regardless of whether it was trained on anonymised inputs. This directly affects assumptions many organisations currently make about model deployment.

4. Consequences of unlawful training
The EDPB introduced the concept of “contaminated” models — AI systems trained on data obtained unlawfully cannot simply be remediated by deleting the original data. Depending on the severity, the model itself may need to be deleted. This has major implications for organisations using third-party AI models or APIs where training data provenance is unclear.


Who Is Affected

These guidelines are directly relevant to any organisation that:

  • Develops, fine-tunes, or procures AI systems trained on personal data
  • Uses AI tools for HR decisions (e.g., CV screening, performance analytics, workforce planning)
  • Deploys AI-powered marketing personalisation, customer segmentation, or recommendation engines
  • Uses automated decision-making or profiling systems where EU residents’ data is involved
  • Acts as a data processor for EU-based controllers deploying AI systems

Even organisations that do not develop AI internally are affected: if you use third-party AI tools (including large language models, analytics platforms, or HR tech) that process personal data of EU residents, your vendor relationships and Data Processing Agreements may need review in light of this guidance.



What Managers Need to Do Now

HR Teams

  • Audit your HR technology stack. Identify which HR tools use AI or machine learning — including applicant tracking systems, engagement platforms, workforce analytics, and performance management tools. Confirm whether EU employee or candidate data is being processed within these systems.
  • Review automated decision-making. If your organisation uses AI to screen CVs, rank applicants, or make employment decisions with legal or similarly significant effects, Article 22 (the right not to be subject to a decision based solely on automated processing) applies. Ensure employees and candidates are informed and can request human intervention.
  • Update DPIAs for HR AI systems. For any high-risk AI processing involving EU employee or candidate data, conduct or update your Data Protection Impact Assessment in line with the new EDPB guidance.
  • Verify vendor compliance. Where HR technology providers process EU employee data using AI, review their DPAs and request documentation of their lawful basis for AI model training and deployment.

Senior Leadership

  • Conduct an AI data inventory. Establish a complete picture of where AI is used within your organisation, what personal data it processes, and the lawful basis claimed for each use case.
  • Commission a purpose limitation review. Where AI systems currently in use are being considered for new applications (e.g., repurposing a customer analytics model for HR), commission a legal assessment of purpose compatibility before deployment.
  • Reassess third-party AI model risk. If your organisation relies on AI provided by third parties — including foundation model APIs — request clarity on training data provenance. The “contaminated model” doctrine means that downstream processors may bear risk if training data was unlawfully sourced.
  • Ensure DPIA coverage for AI. The EDPB guidance signals increased regulatory focus on AI-related DPIAs. Board-level accountability for DPIA completion and documentation should be confirmed.
  • Engage your DPO. This guidance significantly expands the scope of obligations for AI deployments. Your Data Protection Officer should review current AI use cases against the new standards and advise on remediation priorities.
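The inventory and lawful-basis audit described above can be captured in a simple structured record per AI use case. The sketch below is a hypothetical schema, not anything prescribed by the EDPB or GDPR; the field names and the `remediation_flags` helper are illustrative assumptions showing how the checks discussed here (lawful basis recorded, legitimate-interest balancing test documented, DPIA completed) could be automated over an inventory.

```python
from dataclasses import dataclass, field
from typing import List

# The six lawful bases under GDPR Article 6(1)
LAWFUL_BASES = {
    "consent", "contract", "legal_obligation",
    "vital_interests", "public_task", "legitimate_interest",
}

@dataclass
class AIUseCase:
    """One entry in an AI data inventory (hypothetical schema)."""
    system: str                               # e.g. "CV screening tool"
    data_categories: List[str]                # personal data processed
    lawful_basis: str                         # one of LAWFUL_BASES
    balancing_test_documented: bool = False   # legitimate-interest assessment on file?
    dpia_completed: bool = False              # DPIA conducted or updated?

def remediation_flags(use_case: AIUseCase) -> List[str]:
    """Return the compliance gaps to remediate for a single use case."""
    flags = []
    if use_case.lawful_basis not in LAWFUL_BASES:
        flags.append("no recognised lawful basis recorded")
    if (use_case.lawful_basis == "legitimate_interest"
            and not use_case.balancing_test_documented):
        flags.append("legitimate interest claimed without documented balancing test")
    if not use_case.dpia_completed:
        flags.append("DPIA missing or not updated")
    return flags
```

For example, a chatbot entry relying on legitimate interest with no documented balancing test and no DPIA would return two flags, giving the DPO a concrete remediation list per system. This is only a starting point; a real inventory would also record data sources, retention, vendors, and transfer mechanisms.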

Marketing

  • Review AI-driven personalisation and segmentation. Marketing platforms that use AI to personalise content, target advertising, or segment EU customers require a clear lawful basis. Reliance on legitimate interest must now withstand the EDPB’s strict balancing test.
  • Audit consent for profiling. If your marketing AI involves profiling EU consumers to predict preferences or behaviour, review whether valid, specific consent has been obtained where required.
  • Assess compatibility for cross-campaign data reuse. Using data collected for one marketing campaign to train or inform AI models for a different campaign may trigger purpose limitation issues under the new guidance. Seek legal review before proceeding.

EU GDPR vs UK GDPR: What UK Organisations Should Note

The EDPB’s Opinion 28/2024 applies directly to EU GDPR and is addressed to EU supervisory authorities and organisations processing EU residents’ personal data.

UK organisations are regulated under UK GDPR and overseen by the ICO — not the EDPB. EU GDPR decisions and EDPB opinions do not automatically bind UK organisations following Brexit.

However, there are important reasons for UK organisations to monitor this guidance closely:

  • International data transfers: UK organisations that transfer personal data to the EU, or that use AI systems processing data of both UK and EU residents, must comply with EU GDPR for the EU data elements. EDPB guidance directly affects those obligations.
  • ICO alignment: The ICO monitors and often aligns with EDPB positions. UK guidance on AI and data protection is actively developing, and EDPB positions on issues like legitimate interest and purpose limitation frequently influence ICO thinking.
  • Vendor due diligence: UK organisations procuring AI tools from EU-based vendors, or from US vendors serving the EU market, may face contractual requirements to comply with EDPB-aligned standards.

UK organisations should monitor the ICO’s dedicated AI and data protection guidance hub for UK-specific positions on AI and GDPR.


Compliance Checklist

  • [ ] AI system inventory completed — all systems involving EU personal data identified
  • [ ] Lawful basis documented for each AI use case (legitimate interest balancing tests completed where applicable)
  • [ ] Purpose limitation compatibility assessment completed for any planned new AI use cases
  • [ ] DPIAs conducted or updated for high-risk AI processing
  • [ ] Data Processing Agreements with AI vendors reviewed against Opinion 28/2024 standards
  • [ ] Automated decision-making (Article 22) obligations assessed for HR and customer-facing AI

Related Resources & Training

Is your organisation keeping pace with the rapidly evolving intersection of AI and data protection law? Our AI & GDPR Compliance Training is designed for HR professionals, marketing teams, and senior leaders who need to understand their obligations when deploying AI systems involving personal data. Our Data Protection Impact Assessment Masterclass provides hands-on guidance for completing compliant DPIAs for high-risk processing activities, including AI.
