Age Verification Is No Longer Optional: How Regulators Are Raising the Bar

Scott Dooley
6 min read · Mar 11, 2026

In early 2026, regulators on both sides of the Atlantic fined platforms, blocked laws, and rewrote policy — all focused on one question: are you doing enough to verify whether your users are children?

Three enforcement actions in February and March 2026 make the direction of travel unmistakable. Self-declaration tick boxes are no longer a defensible approach to age verification. Regulators expect more, and organisations that don’t adapt face serious financial consequences.

Three Enforcement Actions That Changed the Rules

ICO Fines Reddit £14.47 Million

On 24 February 2026, the ICO issued Reddit with a £14.47 million fine for children’s privacy failures. Until July 2025, Reddit had no measures in place to check the age of its users. It processed under-13s’ data without a lawful basis and failed to carry out a Data Protection Impact Assessment for children’s data.

When Reddit did introduce age checks in July 2025, the checks relied on self-declaration — users stating their own age. The ICO deemed this insufficient. Information Commissioner John Edwards put it plainly: “Relying on users to declare their age themselves is not enough when children may be at risk.”

Reddit has said it intends to appeal.

California Fines PlayOn Sports $1.1 Million

On 3 March 2026, the California Privacy Protection Agency (CPPA) announced a $1.1 million settlement with PlayOn Sports, a ticketing platform used by approximately 1,400 California schools. PlayOn tracked users and served targeted ads to ticket holders, including students, without obtaining opt-in consent for sharing data of consumers aged 13 to 15.

When users wanted to opt out, PlayOn directed them to third-party industry tools rather than providing a direct mechanism. The CPPA found this insufficient. As part of the settlement, PlayOn must now conduct board-level risk assessments.

This case matters because it extends enforcement beyond social media platforms into operational tools used by schools — services that might not immediately seem child-facing but clearly process children’s data.

ICO Fines Imgur’s Owner £247,590

On 5 February 2026, the ICO fined MediaLab, the owner of image-sharing platform Imgur, £247,590 for children’s privacy failures. MediaLab had no age verification measures at all. Children were able to access the platform without restriction and were exposed to harmful content. Like Reddit, MediaLab failed to carry out a DPIA.

The relatively modest fine reflects the organisation’s size and turnover, but the principle is identical to the Reddit case: having no age verification is a clear breach.

Self-Declaration Is Not Enough — What Regulators Now Expect

The Reddit fine draws a line in the sand on age self-declaration. But enforcement is only part of the picture. Regulators and governments are actively building new frameworks for age assurance.

The FTC Creates a Safe Harbour for Age Verification Tech

On 25 February 2026, the US Federal Trade Commission issued a COPPA policy statement (voted 2-0) creating a safe harbour for age verification technology. The statement addresses a long-standing catch-22: to verify a user’s age, you need to collect data from them, which itself raises privacy concerns.

Under the new policy, the FTC will not penalise operators who collect data solely to verify age, provided they delete it promptly and do not reuse it for other purposes. This removes a significant barrier for organisations that want to implement proper age checks but have been concerned about liability arising from the verification process itself.
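
In engineering terms, the safe harbour maps onto a simple pattern: run the check, keep only the outcome, and destroy the evidence. Below is a minimal Python sketch of that pattern. The estimator callable and the AgeCheckOutcome type are hypothetical stand-ins for whatever age assurance provider you integrate, not an FTC-specified or vendor API.

```python
# Sketch of the "collect, verify, delete" pattern. The estimator callable
# and AgeCheckOutcome type are hypothetical stand-ins for a real age
# assurance provider; nothing here is an FTC-specified API.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass(frozen=True)
class AgeCheckOutcome:
    passed: bool          # the only fact the rest of the system needs
    checked_at: datetime  # kept for audit; contains no evidence

def verify_age(
    evidence: bytes,
    estimator: Callable[[bytes], int],
    minimum_age: int = 13,
) -> AgeCheckOutcome:
    """Run the age check, then ensure no raw evidence survives it."""
    try:
        estimated = estimator(evidence)
        return AgeCheckOutcome(
            passed=estimated >= minimum_age,
            checked_at=datetime.now(timezone.utc),
        )
    finally:
        # "Delete promptly": never write the evidence to a database,
        # log line, or analytics event; drop the local reference here.
        del evidence

# Usage with a dummy estimator (a real integration calls a vendor SDK):
outcome = verify_age(b"<id-document-bytes>", estimator=lambda _: 17)
print(outcome.passed)  # True; only this boolean and a timestamp persist
```

The design choice that matters is the return type: downstream code only ever sees a boolean and a timestamp, so the verification data has nowhere to leak.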

UK Government Consultation on Children’s Online Safety

On 2 March 2026, the UK government launched a consultation covering social media, AI chatbots, and gaming platforms. The proposals include minimum age requirements, overnight curfews, screen time limits, and restrictions on addictive features. This goes well beyond data protection into product design and platform governance.

Legal Complexity Remains

Not every regulatory effort has succeeded. On 27 February 2026, a federal court blocked Virginia’s social media time-limit law (SB 854) in NetChoice v. Jones on First Amendment grounds. This highlights the tension between child protection goals and constitutional rights — a tension that will continue to shape legislation in the US.

The blocked law is a reminder that the regulatory path is not always straightforward, but the direction remains consistent: more scrutiny, not less.

What This Means for Your Organisation

If your service is “likely to be accessed by children” — the test used by the ICO’s Age Appropriate Design Code — these enforcement actions apply to you. That threshold is broader than many organisations realise. You do not need to be a children’s platform for the rules to apply.

Self-declaration is no longer a defensible position. Saying “our terms of service prohibit under-13s” offers no meaningful protection if you have not taken steps to enforce it.

Here is what you should be doing now:

  1. Conduct a DPIA specifically addressing children’s data risks. Both the Reddit and Imgur cases involved failures to carry out DPIAs. This is a baseline requirement, not an optional step.
  2. Implement age assurance beyond self-declaration. The ICO has made clear that asking users to state their own age is legally insufficient. Explore age estimation, age verification services, or other technical measures proportionate to your risk (a minimal sketch follows this list).
  3. Ensure you have a lawful basis for processing children’s data. For under-13s, this typically means verifiable parental consent. Reddit’s fine was partly for processing children’s data with no lawful basis at all.
  4. Provide direct opt-out mechanisms. The PlayOn Sports case shows that directing users to third-party industry tools for opt-out does not satisfy regulators. You need to offer a straightforward way for users to exercise their rights within your own service (see the second sketch after this list).
  5. Document board-level risk assessments. Both the CPPA and ICO expect organisations to demonstrate senior oversight of children’s data risks. This is not just an IT or legal team responsibility.
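
To make items 2 and 3 concrete, here is a hedged sketch of routing users on an age assurance signal rather than a self-declared date of birth, with under-13s held pending verifiable parental consent. The Signal values and the outcomes they map to are illustrative assumptions, not a regulator-specified scheme; the right thresholds and defaults should come out of your DPIA.

```python
# Sketch of routing on an age assurance signal rather than a self-declared
# date of birth. The Signal values and the outcomes they map to are
# illustrative assumptions, not a regulator-specified scheme.
from enum import Enum

class Signal(Enum):
    LIKELY_ADULT = "likely_adult"
    LIKELY_13_TO_17 = "likely_13_to_17"
    LIKELY_UNDER_13 = "likely_under_13"

def onboarding_path(signal: Signal) -> str:
    """Decide what processing is permitted for a new account."""
    if signal is Signal.LIKELY_ADULT:
        return "proceed: standard processing"
    if signal is Signal.LIKELY_13_TO_17:
        return "proceed: high-privacy defaults, no targeted advertising"
    # Under-13s: no processing until verifiable parental consent exists.
    return "hold: obtain verifiable parental consent before processing"

print(onboarding_path(Signal.LIKELY_UNDER_13))
```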
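
For item 4, a first-party opt-out can be as simple as recording the request in your own store and consulting it before any data sharing. The in-memory registry below is an illustrative assumption; a real system would persist the record and propagate it to every downstream processor.

```python
# Sketch of a first-party opt-out that takes effect inside your own
# service. The in-memory registry is an illustrative assumption; a real
# system would persist it and propagate it to every downstream processor.
from datetime import datetime, timezone

class OptOutRegistry:
    def __init__(self) -> None:
        self._opted_out: dict[str, datetime] = {}

    def opt_out(self, user_id: str) -> None:
        # Record the request and honour it immediately: no detour
        # through an external industry tool, no extra steps.
        self._opted_out[user_id] = datetime.now(timezone.utc)

    def may_share(self, user_id: str) -> bool:
        # Ad and analytics code checks this before any data sharing.
        return user_id not in self._opted_out

registry = OptOutRegistry()
registry.opt_out("student-42")
print(registry.may_share("student-42"))  # False
```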

Where This Is Heading

The pattern across these cases is consistent. Regulators are no longer accepting passive approaches to children’s data. Whether it is the ICO in the UK, the CPPA in California, or the FTC at federal level, the message is the same: if children can access your service, you need to prove you have taken active steps to protect them.

Organisations that still rely on age self-declaration or terms-of-service restrictions are carrying serious regulatory risk. The fines issued in early 2026 are not outliers — they are signals of a sustained enforcement priority.

Getting age assurance right is no longer a nice-to-have. It is a business priority with direct financial consequences. If you need to build data protection awareness across your organisation, Measured Collective provides practical GDPR and data protection training designed for teams that need to understand these risks and act on them.

Author

  • Scott Dooley is a seasoned entrepreneur and data protection expert with over 15 years of experience in the tech industry. As the founder of Measured Collective and Kahunam, Scott has dedicated his career to helping businesses navigate the complex landscape of data privacy and GDPR compliance.

    With a background in marketing and web development, Scott brings a unique perspective to data protection issues, understanding both the technical and business implications of privacy regulations. His expertise spans from cookie compliance to implementing privacy-by-design principles in software development.

    Scott is passionate about demystifying GDPR and making data protection accessible to businesses of all sizes. Through his blog, he shares practical insights, best practices, and the latest developments in data privacy law, helping readers stay informed and compliant in an ever-changing regulatory environment.
