British Technology Companies and Child Protection Agencies to Examine AI's Ability to Create Exploitation Images
Technology companies and child safety organizations will receive authority to evaluate whether AI systems can produce child abuse images under new British laws.
Significant Increase in AI-Generated Harmful Content
The announcement coincided with figures from a child protection watchdog showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, the government will allow designated AI developers and child safety organizations to inspect AI models – the foundational systems for conversational AI and visual AI tools – and ensure they have adequate protective measures to stop them from producing images of child exploitation.
"Ultimately about stopping abuse before it occurs," declared Kanishka Narayan, noting: "Experts, under rigorous conditions, can now detect the danger in AI models promptly."
Tackling Regulatory Challenges
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it.
This law is designed to avert that problem by helping to halt the production of those images at source.
Legal Framework
The government is introducing the changes as amendments to criminal justice legislation, which also implements a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Consequences
This week, the official visited the London base of a children's helpline and listened to a simulated call to advisers featuring an account of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children facing blackmail online, it is a source of extreme anger in me and rightful anger amongst families," he said.
Concerning Statistics
A prominent online safety foundation reported that cases of AI-generated exploitation content – each case potentially referring to a web page containing numerous images – had more than doubled so far this year.
Cases of category A content – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were predominantly victimized, making up 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Response
The law change could "represent a vital step to ensure AI tools are secure before they are released," commented the head of the online safety foundation.
"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a simple actions, providing criminals the ability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Material which additionally commodifies survivors' suffering, and makes young people, especially female children, less safe both online and offline."
Support Interaction Data
The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related harms raised in the conversations include:
- Using AI to evaluate body size, physique and looks
- AI assistants dissuading children from speaking to trusted adults about harm
- Facing harassment online with AI-generated content
- Digital extortion using AI-manipulated images
Between April and September this year, the helpline delivered 367 support interactions in which AI, conversational AI and related terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.