British Tech Companies and Child Safety Officials to Test AI's Ability to Generate Abuse Content

Technology companies and child safety organizations will be granted permission to assess whether AI systems can generate child abuse material under new UK legislation.

Substantial Rise in AI-Generated Illegal Material

The announcement came alongside findings from a safety monitoring body showing that cases of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the authorities will permit designated AI companies and child protection groups to examine AI models – the foundational systems behind conversational AI and image generators – and verify that they have sufficient safeguards to prevent them from producing images of child exploitation.

"This is fundamentally about preventing abuse before it occurs," said the minister for AI and online safety, adding: "Specialists, under strict protocols, can now detect risks in AI systems early."

Addressing Legal Challenges

The amendments have been introduced because creating and possessing CSAM is against the law, which has meant that AI developers and other parties could not generate such images even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before taking action against it.

This legislation is designed to avert that problem by making it possible to stop the production of such images at their origin.

Legal Structure

The changes are being introduced by the authorities as amendments to criminal justice legislation, which also implements a prohibition on owning, producing or distributing AI systems designed to generate exploitative content.

Real-World Consequences

Recently, the minister toured the London headquarters of Childline and heard a mock-up call to counsellors involving a report of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I learn about children facing blackmail online, it causes intense frustration in me and justified concern amongst families," he said.

Concerning Data

A leading internet monitoring organization reported that instances of AI-generated abuse material – counted as webpages, each of which may contain multiple images – had significantly increased so far this year.

Instances of category A material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a crucial step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving criminals the capability to create a potentially endless amount of sophisticated, lifelike exploitative content," she continued. "Content which further exploits survivors' suffering, and makes young people, especially girls, more vulnerable both online and offline."

Support Interaction Data

Childline also released details of support interactions where AI has been mentioned. AI-related harms discussed in the conversations comprise:

  • Employing AI to rate body size and appearance
  • AI assistants discouraging young people from consulting trusted adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were discussed, significantly more than in the equivalent timeframe last year.

Fifty percent of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.

Charles Patel