UK Technology Companies and Child Safety Officials to Test AI's Ability to Generate Exploitation Content

Technology companies and child safety agencies will receive permission to assess whether AI tools can produce child exploitation material under new British laws.

Significant Rise in AI-Generated Harmful Content

The announcement came alongside findings from a safety watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will permit designated AI companies and child safety organizations to inspect AI models – the underlying systems behind chatbots and image generators – to ensure they have sufficient safeguards preventing them from producing depictions of child exploitation.

Kanishka Narayan described the move as "fundamentally about preventing abuse before it happens," noting: "Experts, under strict protocols, can now identify the danger in AI models early."

Tackling Regulatory Challenges

The changes address a legal gap: because it is against the law to produce or possess CSAM, AI developers and others could not generate such content even as part of a testing process. Until now, authorities could not act until AI-generated CSAM had already been published online.

The legislation is designed to prevent that problem by making it possible to halt the creation of such material at source.

Legislative Vehicle

The government is introducing the changes as amendments to criminal justice legislation, which also implements a ban on possessing, creating or distributing AI systems designed to generate exploitative content.

Practical Impact

Recently, the minister toured the London base of a children's helpline and listened to a mock-up call to counsellors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I learn about young people facing blackmail online, it is a source of extreme anger in me and justified concern amongst families," he stated.

Alarming Statistics

A leading online safety organization stated that instances of AI-generated abuse material – such as webpages that may contain multiple images – had more than doubled so far this year.

Instances of the most severe material – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "represent a vital step to ensure AI tools are secure before they are released," commented the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the ability to produce potentially endless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which further commodifies victims' trauma, and renders children, particularly girls, more vulnerable on and offline."

Counseling Interaction Data

Childline also published data from counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:

  • Using AI to evaluate body size and appearance
  • AI assistants discouraging children from talking to trusted adults about abuse
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Adrian Carrillo

A passionate gamer and tech enthusiast who shares insights on gaming strategies and digital security.