UK Tech Firms and Child Protection Officials to Test AI's Ability to Create Exploitation Images

Tech firms and child protection agencies will receive permission to evaluate whether AI systems can produce child exploitation material under recently introduced UK laws.

Substantial Rise in AI-Generated Illegal Content

The announcement came as a child protection watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, the authorities will allow designated AI developers and child safety groups to examine AI models – the underlying systems for chatbots and image generators – and ensure they have sufficient protective measures to prevent them from creating images of child sexual abuse.

"Ultimately, this is about stopping exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI systems promptly."

Tackling Regulatory Challenges

The amendments address a legal bind: because creating and possessing CSAM is against the law, AI developers and others could not generate such content even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.

This law is designed to prevent that problem by enabling the creation of such material to be halted at source.

Legal Framework

The authorities are introducing the amendments as modifications to criminal justice legislation, which also establishes a ban on owning, creating or sharing AI systems developed to create exploitative content.

Practical Consequences

This week, the official toured the London base of a children's helpline and listened to a mock-up call to advisers involving an account of AI-based abuse. The interaction portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I learn about young people facing blackmail online, it is a source of intense frustration for me and rightful anger amongst parents," he stated.

Alarming Statistics

A leading internet monitoring foundation stated that instances of AI-generated exploitation content – such as webpages that may contain numerous images – had more than doubled so far this year.

Instances of category A content – the most serious form of abuse – increased from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
  • Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a vital step to guarantee AI tools are secure before they are launched," stated the head of the online safety foundation.

"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, providing criminals the capability to make potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies victims' trauma, and renders children, especially female children, more vulnerable on and off line."

Support Session Data

Childline also released data from counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:

  • Using AI to rate body size and looks
  • AI assistants dissuading children from talking to trusted guardians about harm
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-faked pictures

Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and associated topics were mentioned, significantly more than in the equivalent timeframe last year.

Fifty percent of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.

Desiree Willis