The UK Just Quietly Flipped the Switch on Automated Decisions

Scott Dooley
5 min read · Feb 12, 2026

There’s a question doing the rounds in data protection circles: do the UK’s new automated decision-making rules actually change anything? It’s a fair question. But I think it’s the wrong one.

On 5 February 2026, the Data (Use and Access) Act 2025 rewrote the rules on automated decision-making in the UK. The old Article 22 of the UK GDPR said organisations generally couldn’t make solely automated decisions that significantly affected people unless the decision was based on explicit consent, authorised by law, or necessary for a contract. The default was prohibition. Now, four new articles (22A through 22D) replace that entirely. The default is permission.

That’s not a tweak. That’s a philosophical inversion.

Under the new framework, if your automated system processes ordinary personal data (not special category data like health, race, or political opinions), you can make solely automated decisions about people using any lawful basis you like, including legitimate interests. You just have to provide four safeguards: tell people the decision happened, let them make representations, provide human intervention if they ask, and give them a way to contest it.
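To make that concrete, here is a minimal sketch of what those four safeguards might look like inside a decision pipeline. The class and field names are my own hypothetical choices, not anything the Act prescribes; the point is simply that each safeguard becomes a hook your system has to support and a record it has to keep.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical record for one solely automated "significant decision".
# Each field maps to one of the four safeguards in the new framework.
@dataclass
class AutomatedDecisionRecord:
    subject_id: str
    outcome: str                                   # e.g. "loan_rejected"
    decided_at: datetime
    notified_at: Optional[datetime] = None         # safeguard 1: tell the person
    representations: list[str] = field(default_factory=list)  # safeguard 2
    human_review_requested: bool = False           # safeguard 3: human intervention
    human_reviewer: Optional[str] = None
    contested: bool = False                        # safeguard 4: right to contest

    def notify_subject(self) -> None:
        """Record that the individual was told an automated decision was made."""
        self.notified_at = datetime.now(timezone.utc)

    def add_representation(self, text: str) -> None:
        """Store the individual's representations against the decision."""
        self.representations.append(text)

    def request_human_review(self, reviewer: str) -> None:
        """Escalate the decision to a named human reviewer on request."""
        self.human_review_requested = True
        self.human_reviewer = reviewer


if __name__ == "__main__":
    record = AutomatedDecisionRecord(
        subject_id="applicant-123",
        outcome="loan_rejected",
        decided_at=datetime.now(timezone.utc),
    )
    record.notify_subject()
    record.add_representation("My income figure was out of date.")
    record.request_human_review(reviewer="underwriter-7")
    print(record)
```

None of this is onerous to build, which is rather the point: the safeguards are an after-the-fact process, not a barrier to making the decision in the first place.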

For special category data, the old restrictions mostly remain. Explicit consent, substantial public interest, or a contractual-plus-public-interest combination. So if your algorithm is processing someone’s health records to deny them insurance, you’re still in prohibition territory. But if your credit scoring model uses postcodes, income, and spending patterns to reject a loan application with no human ever looking at it? That’s now fine, provided you tick the safeguard boxes.

Here’s where it gets interesting.

You could argue the old exceptions were “quite broad” anyway. Fair point — in practice, many organisations were already finding ways to justify their automated processing under the old regime. But the shift matters because it changes who bears the burden. Before, the organisation had to demonstrate it fit within a narrow exception. Now, the individual has to invoke their rights after the fact. That’s a meaningful difference, even if the practical outcomes look similar on paper.

Debevoise & Plimpton put together a useful comparison of the UK and EU positions. The EU still runs prohibition-first under Article 22. The UK now runs permission-first with safeguards. If you operate in both jurisdictions, you need separate compliance frameworks. Bird & Bird noted this takes the UK closer to the approach under the old Data Protection Act 1998, before the GDPR arrived. Progress, apparently, means going backwards.

The bit that deserves more attention is Article 22D. It gives the Secretary of State the power to issue secondary legislation that can redefine what counts as a “significant decision” and when “meaningful human involvement” has taken place. The real significance of these amendments may only become clear when those regulations appear. The primary legislation is the frame. The secondary legislation will be the painting.

I keep coming back to the Oxford Law Blogs piece from January that picks this apart for credit decisions specifically. Their argument is sharp: by limiting the strongest protections to special category data, the DUAA ignores a well-documented problem. Non-sensitive data like postcodes and shopping habits can serve as strong proxies for race, ethnicity, and socioeconomic status. Your algorithm doesn’t need to know someone’s ethnicity to discriminate based on it. It just needs their postcode.
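Here is a toy illustration of that proxy effect, using entirely synthetic numbers invented for the example. The scoring rule below never sees group membership, only postcode area, yet because postcode is correlated with group, approval rates diverge anyway.

```python
from collections import defaultdict

# Entirely synthetic, hand-made data for illustration only.
# Each applicant has a postcode area and a group label the model never sees.
# In area "B", most applicants happen to belong to group Y.
applicants = (
    [{"postcode_area": "A", "group": "X"}] * 80
    + [{"postcode_area": "A", "group": "Y"}] * 20
    + [{"postcode_area": "B", "group": "X"}] * 20
    + [{"postcode_area": "B", "group": "Y"}] * 80
)

def approve(applicant: dict) -> bool:
    """A 'neutral' scoring rule that only looks at postcode area."""
    return applicant["postcode_area"] == "A"

# Measure approval rates per group, even though the rule never used the group.
totals: dict[str, int] = defaultdict(int)
approved: dict[str, int] = defaultdict(int)
for a in applicants:
    totals[a["group"]] += 1
    approved[a["group"]] += approve(a)

for group in sorted(totals):
    print(f"group {group}: approval rate {approved[group] / totals[group]:.0%}")
# group X: approval rate 80%
# group Y: approval rate 20%
```

The numbers are contrived, but the mechanism is exactly the one the Oxford piece describes: strip out the sensitive attribute and the disparity survives intact, carried by the proxy.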

And the UK Government knows this. In its own ECHR memorandum for the legislation, the government acknowledged that the reforms are “likely to increase the level of Article 22 processing” and that this “could potentially lead to discrimination, particularly from private organisations.” The conclusion? It’s “justifiable and proportionate, given the legitimate aim of ensuring the economic wellbeing of the country.”

Read that again. The government looked at the possibility of increased discrimination from automated systems, weighed it against economic growth, and decided the trade-off was acceptable. I find that sentence remarkable for its honesty, if nothing else.

The reason is clear enough. When AI entrepreneurs and investors decide where to build and grow their services, regions with flexible regulation are more likely to attract the money. The UK wants to remain competitive, and it’s not alone in thinking this way. The EU is having its own conversations about easing rules to support AI development. Everyone is racing to find the balance between protecting people and not scaring off investment. The UK has simply moved first.

Then there’s the “meaningful human involvement” question. Article 22A says a decision is solely automated if there’s no meaningful human involvement. But everyone in the industry knows what happens in practice. A human reviewer gets a recommendation from an algorithm, glances at it, clicks approve. Slaughter and May’s analysis describes the legislation as significantly loosening ADM restrictions for AI deployment. The ICO has said that rubber-stamping doesn’t count as meaningful involvement. But enforcement is another matter entirely. The gap between what the ICO says and what organisations do is where a lot of privacy law actually lives.
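If you want to be able to show that your human involvement is more than rubber-stamping, one approach (my sketch, not anything the ICO has prescribed) is to log what reviewers actually do: how long they spend, what they look at, and whether they ever depart from the algorithm's recommendation. A review queue where nobody ever overrides and the median review takes three seconds tells its own story.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical log entry for one human review of an algorithmic recommendation.
@dataclass
class ReviewEvent:
    recommendation: str     # what the algorithm suggested
    final_decision: str     # what the human decided
    seconds_spent: float    # time the reviewer had the case open
    evidence_opened: int    # how many underlying documents they viewed

def involvement_report(events: list[ReviewEvent]) -> dict:
    """Crude indicators of whether review is meaningful or rubber-stamping."""
    overrides = sum(e.final_decision != e.recommendation for e in events)
    return {
        "cases": len(events),
        "override_rate": overrides / len(events),
        "median_seconds": median(e.seconds_spent for e in events),
        "median_docs_opened": median(e.evidence_opened for e in events),
    }

if __name__ == "__main__":
    log = [
        ReviewEvent("reject", "reject", 4.0, 0),
        ReviewEvent("reject", "reject", 3.5, 0),
        ReviewEvent("reject", "approve", 310.0, 6),
    ]
    print(involvement_report(log))
```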

If you’re running automated systems that affect people, the practical upshot is straightforward: audit what you have, make sure the four safeguards are in place, and if you also operate in the EU, don’t assume your UK compliance covers you there. It doesn’t.

The ICO’s updated guidance on automated decision-making is expected in Spring 2026. That will clarify some of the open questions. The Commencement No. 6 Regulations were published just days before the 5 February implementation date, which tells you something about how smoothly this was all managed.

The DUAA isn’t a catastrophe. It isn’t a revolution either. It’s a quiet, deliberate shift in the relationship between automated systems and the people they affect. The UK has decided that the default should be to let the machines decide, with rights for individuals to push back afterwards. Whether those rights are meaningful enough will depend on how organisations implement them, how the ICO enforces them, and what the Secretary of State does with those Article 22D powers.

Do these rules change much? I think they change the thing that matters most: who has to make the first move.

Author

Scott Dooley is a seasoned entrepreneur and data protection expert with over 15 years of experience in the tech industry. As the founder of Measured Collective and Kahunam, Scott has dedicated his career to helping businesses navigate the complex landscape of data privacy and GDPR compliance.

With a background in marketing and web development, Scott brings a unique perspective to data protection issues, understanding both the technical and business implications of privacy regulations. His expertise spans from cookie compliance to implementing privacy-by-design principles in software development.

Scott is passionate about demystifying GDPR and making data protection accessible to businesses of all sizes. Through his blog, he shares practical insights, best practices, and the latest developments in data privacy law, helping readers stay informed and compliant in an ever-changing regulatory environment.
