UK Tech Firms and Child Safety Officials to Examine AI's Capability to Generate Exploitation Content

Tech firms and child protection organizations will receive authority to evaluate whether AI tools can produce child abuse material under new UK legislation.

Substantial Increase in AI-Generated Harmful Material

The announcement coincided with figures from a safety watchdog showing that reports of AI-generated CSAM have risen dramatically in the past year, from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the government will allow designated AI developers and child safety organizations to examine AI models – the underlying technology for conversational AI and image generators – and verify that they have adequate safeguards to prevent them from creating depictions of child exploitation.

"Ultimately about preventing abuse before it occurs," stated Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now identify the risk in AI systems early."

Addressing Regulatory Obstacles

The amendments address a legal obstacle: because it is illegal to create and possess CSAM, AI developers and other parties have been unable to generate such images as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was published online before addressing it.

This law is designed to prevent that issue by helping to halt the production of such images at source.

Legal Framework

The government is adding the amendments to the crime and policing bill, which also imposes a ban on possessing, producing or distributing AI models designed to create exploitative content.

Practical Consequences

Recently, the minister toured the London base of a children's helpline and listened to a simulated call to advisers involving an account of AI-based abuse. The interaction portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself, constructed with AI.

"When I hear about children experiencing blackmail online, it is a source of intense frustration in me and rightful anger amongst parents," he stated.

Alarming Data

A prominent internet monitoring organization reported that cases of AI-generated abuse content – each of which can be a web page containing numerous files – had more than doubled so far this year.

Instances of category A content – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
  • Portrayals of infants aged up to two increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a vital step to ensure AI products are secure before they are released," commented the head of the monitoring organization.

"AI tools have made it so survivors can be targeted all over again with just a simple actions, providing criminals the capability to create potentially endless amounts of sophisticated, lifelike exploitative content," she continued. "Material which further exploits survivors' suffering, and renders young people, especially girls, more vulnerable both online and offline."

Support Interaction Information

Childline also published details of support sessions in which AI was mentioned. AI-related risks discussed in the sessions included:

  • Using AI to evaluate weight, physique and looks
  • AI assistants dissuading children from consulting safe adults about abuse
  • Being bullied online with AI-generated content
  • Digital extortion using AI-faked pictures

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and related topics were discussed – four times as many as in the equivalent timeframe last year.

Fifty percent of the mentions of AI in the 2025 sessions were related to mental health and wellbeing, including using chatbots for support and AI therapy applications.
