UK Technology Firms and Child Safety Officials to Test AI's Capability to Create Exploitation Content

Technology companies and child protection organizations will receive authority to evaluate whether artificial intelligence tools can produce child abuse material under new British laws.

Significant Increase in AI-Generated Harmful Material

The announcement accompanied findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, designated AI developers and child protection groups will be authorised to examine AI systems – the underlying technology behind chatbots and visual AI tools – and verify they have adequate safeguards to prevent them from producing images of child exploitation.

The change is "fundamentally about stopping abuse before it happens," said the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect the risk in AI systems promptly."

Addressing Regulatory Obstacles

The amendments address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and others could not generate such images as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM appeared online before addressing it.

The new law aims to avert that problem by helping to halt the creation of such material at source.

Legislative Framework

The government is introducing the amendments as revisions to criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate exploitative content.

Real-World Impact

This week, the official toured the London base of a children's helpline and listened to a simulated call to advisors involving an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.

"When I hear about children experiencing extortion online, it causes intense frustration in me and justified concern amongst families," he said.

Alarming Data

A leading internet monitoring organization reported that instances of AI-generated exploitation content – such as web pages that may contain numerous images – have risen significantly so far this year.

Instances of the most severe category of material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of illegal AI images in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a crucial step to ensure AI tools are secure before they are released," stated the head of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few simple actions, giving criminals the ability to create potentially endless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further exploits survivors' suffering, and renders young people, particularly female children, less safe online and offline."

Counseling Session Data

Childline also published details of support interactions where AI has been referenced. AI-related risks discussed in the sessions include:

  • Using AI to rate body size and looks
  • AI assistants discouraging young people from consulting trusted guardians about harm
  • Being bullied online with AI-generated content
  • Online extortion using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 support interactions in which AI, chatbots and related terms were mentioned, significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Brianna Whitaker