GDPR Subject Access Requests When AI Processes Your Data

When someone submits a subject access request (SAR), you have one calendar month to respond. For most organisations, this is straightforward. You pull records from your CRM, export email exchanges, compile relevant documents, and send them off.

But what happens when the requested data sits inside AI training sets? Or when automated decisions come from neural networks trained on millions of data points? Artificial intelligence has transformed a routine compliance task into a technical challenge that many organisations aren’t prepared for.

The law hasn’t changed. Your obligations under UK GDPR remain the same. (If you serve EU customers, EU GDPR applies to them separately, though the SAR requirements are nearly identical.) But the way you fulfil them when AI is involved requires a different approach.

What must you disclose about AI processing?

Article 15 of the UK GDPR grants individuals the right to obtain confirmation that you’re processing their personal data. When you are, you must provide a copy of that data along with information about the processing, including the purposes, categories of data, and recipients.

When AI is part of your processing activities, Article 22 adds another layer. Article 22 restricts solely automated decision-making, including profiling, and Article 15(1)(h) attaches a disclosure duty to it: if you're making decisions that produce legal effects or similarly significant effects on individuals, you must provide "meaningful information about the logic involved, as well as the significance and the envisaged consequences" of such processing.

The ICO has clarified that “meaningful information about the logic involved” doesn’t mean you need to hand over source code or proprietary algorithms. What you do need to provide is a general explanation of how the system works. Think of it as explaining the method, not revealing the recipe.

This obligation only applies to decisions that have legal or similarly significant effects. Automated credit decisions, recruitment screening that determines whether a candidate progresses, or insurance pricing that affects coverage all likely meet this threshold. Product recommendations or content personalisation typically don’t.

The practical challenges

The theory sounds manageable. The practice is considerably harder.

First, there’s the problem of finding an individual’s data within training datasets. Many AI models are trained on large datasets where personal data has been anonymised or pseudonymised. If you can’t identify which data points relate to the SAR requester, how do you provide a copy? And if the data was genuinely anonymised before training, does it even count as personal data anymore?

Second, there’s the issue of data embedded in model parameters. Once an AI model is trained, individual data points don’t exist in an extractable form. The model’s weights and biases represent learned patterns from thousands or millions of examples. You can’t point to a specific neuron and say “this is John Smith’s data.” The Information Commissioner’s Office acknowledges this challenge, but the legal obligation to provide a copy of personal data doesn’t disappear just because it’s technically difficult.

Third, explaining how AI models produce outputs is the black box problem in another form. Modern machine learning models, particularly deep neural networks, make predictions based on patterns that aren’t easily interpretable. You might be able to say “the model considers 50 features including payment history, account age, and transaction patterns” but explaining why the model made a specific decision for a specific individual is often impossible, even for the data scientists who built it.
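Explainability tooling can't open the box completely, but it can produce the per-decision feature attributions that make "meaningful information" more than a slogan. A minimal sketch using the open-source SHAP library; the model, features, and data here are synthetic stand-ins, not a recommended credit model:

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data: three hypothetical credit features.
feature_names = ["payment_history", "account_age", "transaction_volume"]
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer attributes a single prediction to the input features.
explainer = shap.TreeExplainer(model)
applicant = X_train[:1]  # one applicant's feature row
shap_values = explainer.shap_values(applicant)

# Rank features by how strongly they pushed this particular decision.
for name, value in sorted(
    zip(feature_names, shap_values[0]), key=lambda p: abs(p[1]), reverse=True
):
    print(f"{name}: {value:+.3f}")

Attributions like these won't satisfy Article 15 on their own, but they give reviewers and data subjects something concrete to anchor an explanation to.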

Finally, there’s the human oversight question. Many organisations claim to have “human in the loop” review of AI decisions. But there’s a difference between genuine oversight and rubber-stamping. If your staff member has 30 seconds to review an AI recommendation before approving it, and they approve 99% of cases, that’s not meaningful human involvement. The ICO is sceptical of purely formal human oversight that doesn’t involve actual consideration.

Compliance strategies

The good news is that these challenges have practical solutions, though they require planning before you deploy AI systems, not afterwards.

Start with privacy by design. Build traceability into your AI systems from the beginning. This means maintaining clear records of what data was used for training, documenting your model’s logic and decision-making process, and creating the ability to generate explanations for individual decisions. If your AI vendor can’t provide these capabilities, that’s a red flag.
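What traceability can look like in practice: a record kept alongside every trained model, so a SAR response can state which datasets and data categories fed which model version. A sketch only; the fields are illustrative, not a standard:

from dataclasses import dataclass
from datetime import date

@dataclass
class ModelProvenance:
    """Record kept alongside each trained model for SAR and audit purposes."""
    model_name: str
    model_version: str
    trained_on: date
    training_datasets: list[str]         # dataset names and versions used
    personal_data_categories: list[str]  # e.g. payment history, contact details
    lawful_basis: str                    # basis relied on for the training
    explanation_method: str              # how individual decisions are explained

record = ModelProvenance(
    model_name="credit-screening",
    model_version="2.3.1",
    trained_on=date(2025, 1, 14),
    training_datasets=["applications_2019_2024 v7"],
    personal_data_categories=["payment history", "account age"],
    lawful_basis="legitimate interests (LIA ref 2024-11)",
    explanation_method="per-decision SHAP report",
)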

Update your privacy notices to specifically address AI processing. Generic statements about “automated processing” don’t cut it anymore. Explain what AI systems you use, what they do, and what decisions they make. If decisions are automated without human involvement, say so. If there’s human oversight, explain what that actually means.

Implement genuine human oversight where decisions have significant effects. This means giving staff the time, information, and authority to override AI recommendations. They need to understand what factors the AI considered and have access to the actual data, not just a score or recommendation. Most importantly, they need the ability to apply human judgement without pressure to simply approve what the AI suggests.
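You can also make oversight measurable. Here's a sketch of a simple check over review logs that flags the rubber-stamping patterns described above; the thresholds are illustrative assumptions, not regulatory figures:

from statistics import median

def oversight_warnings(review_seconds: list[float], overridden: list[bool]) -> list[str]:
    """Flag review-log patterns that suggest rubber-stamping rather than oversight."""
    warnings = []
    if median(review_seconds) < 60:  # illustrative threshold
        warnings.append("median review time under one minute")
    if sum(overridden) / len(overridden) < 0.01:  # illustrative threshold
        warnings.append("AI recommendation overridden in under 1% of cases")
    return warnings

# Example: 30-second reviews and a 0.5% override rate trip both warnings.
print(oversight_warnings([30.0] * 200, [True] + [False] * 199))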

Prepare SAR response protocols specifically for AI systems. Document where AI is used in your processing, what data it accesses, and how you’ll respond to requests for information about automated decisions. Your data protection team needs to know which systems use AI, how to extract relevant information, and who can explain the technical details in plain language.
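A practical starting point is an internal register mapping each AI system to the data it touches, its Article 22 status, and who can explain it in plain language. The structure and entries below are hypothetical:

AI_SYSTEM_REGISTER = [
    {
        "system": "credit-screening",
        "purpose": "automated credit decisions",
        "article_22": "yes: legal or similarly significant effect",
        "personal_data": ["payment history", "account age", "transaction patterns"],
        "human_oversight": "underwriter review with authority to override",
        "explanation_contact": "data-science team",
        "sar_procedure": "runbooks/sar-credit-screening.md",
    },
    {
        "system": "product-recommendations",
        "purpose": "content personalisation",
        "article_22": "no: below the significant-effects threshold",
        "personal_data": ["browsing history"],
        "human_oversight": "not required",
        "explanation_contact": "engineering team",
        "sar_procedure": "runbooks/sar-recommendations.md",
    },
]

def systems_processing(category: str) -> list[str]:
    """List the registered AI systems that touch a given category of personal data."""
    return [e["system"] for e in AI_SYSTEM_REGISTER if category in e["personal_data"]]

print(systems_processing("payment history"))  # ['credit-screening']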

The Data (Use and Access) Act 2025, which received Royal Assent in June 2025, puts "reasonable and proportionate" searching in response to SARs on a statutory footing. This gives organisations more flexibility to push back against overly broad requests and to consider the effort required to retrieve information. For AI systems, this might mean you can explain general processing rather than attempting to reconstruct every data point that influenced a model. But proportionality is a defence against excessive requests, not a reason to provide less information than the law requires.

Building compliance into your AI strategy

AI creates genuine challenges for SAR compliance, but your legal obligations haven’t changed. Individuals still have the right to access their data and understand how decisions about them are made.

The key is building data traceability into your AI systems from the start. If you wait until someone submits a SAR to figure out how you’ll respond, you’re too late. Document your AI processing, maintain clear records, and ensure your team understands both the technical and legal requirements.

The ICO continues to develop its guidance on AI and data protection. Its existing guidance on automated decision-making and AI provides a solid foundation, but this is an evolving area. Stay alert to new guidance and be prepared to adapt your processes.

If you’re implementing AI systems or responding to SARs involving automated processing, proper training is essential. Measured Collective offers specialist GDPR training that covers AI-specific challenges and practical compliance strategies. Visit measuredcollective.com to learn more about our training programmes.

Author

Scott Dooley is a seasoned entrepreneur and data protection expert with over 15 years of experience in the tech industry. As the founder of Measured Collective and Kahunam, Scott has dedicated his career to helping businesses navigate the complex landscape of data privacy and GDPR compliance.

With a background in marketing and web development, Scott brings a unique perspective to data protection issues, understanding both the technical and business implications of privacy regulations. His expertise spans from cookie compliance to implementing privacy-by-design principles in software development.

Scott is passionate about demystifying GDPR and making data protection accessible to businesses of all sizes. Through his blog, he shares practical insights, best practices, and the latest developments in data privacy law, helping readers stay informed and compliant in an ever-changing regulatory environment.

