British Tech Companies and Child Protection Agencies to Examine AI's Ability to Generate Abuse Content

Technology companies and child safety agencies will receive permission to evaluate whether artificial intelligence systems can generate child exploitation material under new British legislation.

Substantial Increase in AI-Generated Illegal Material

The announcement came alongside findings from a safety watchdog showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the authorities will permit approved AI developers and child protection groups to examine AI systems – the foundational systems for chatbots and image generators – and verify they have adequate safeguards to stop them from producing depictions of child sexual abuse.

The measures are "ultimately about stopping abuse before it occurs," stated the minister for AI and online safety, adding: "Specialists, under strict conditions, can now identify the danger in AI models early."

Tackling Legal Obstacles

The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot create such content as part of a testing regime. Previously, officials had to delay action until AI-generated CSAM was uploaded online before addressing it.

This legislation is designed to prevent that issue by enabling the production of such material to be halted at source.

Legal Framework

The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a ban on owning, creating or distributing AI systems designed to create exploitative content.

Real-World Impact

Recently, the official toured the London headquarters of a children's helpline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I hear about children facing blackmail online, it is a cause of intense anger in me and justified concern amongst families," he said.

Alarming Data

A prominent internet monitoring organization stated that instances of AI-generated abuse content – counted as online pages, each of which may contain numerous files – had more than doubled so far this year.

Cases of the most severe category of material – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were overwhelmingly victimized, making up 94% of illegal AI depictions in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a crucial step to ensure AI tools are secure before they are launched," stated the head of the internet monitoring organization.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few simple actions, giving criminals the capability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally commodifies survivors' trauma, and renders young people, especially girls, more vulnerable both online and offline."

Counseling Interaction Information

The children's helpline also released details of support sessions in which AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to evaluate weight, physique and appearance
  • Chatbots dissuading young people from consulting safe guardians about harm
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-manipulated images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to psychological wellbeing, including the use of chatbots for support and AI therapy applications.

Olivia Smith