UK Technology Companies and Child Protection Officials to Examine AI's Ability to Create Exploitation Images
Under new British laws, tech firms and child protection agencies will be permitted to test whether AI tools can produce child abuse material.
Substantial Increase in AI-Generated Illegal Content
The declaration coincided with revelations from a safety watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, approved AI developers and child safety organisations will be permitted to examine AI models – the underlying systems behind chatbots and image generators – to ensure they have sufficient safeguards to stop them producing child exploitation images.
"This is ultimately about preventing exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous conditions, can now identify risks in AI systems early."
Tackling Regulatory Challenges
The changes have been introduced because it is illegal to create and possess CSAM, meaning AI developers and others cannot generate such images as part of a testing regime. Until now, authorities have had to wait until AI-generated CSAM was uploaded online before dealing with it.
This law is designed to avert that problem by helping to halt the creation of such material at its source.
Legal Structure
The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a ban on possessing, producing or distributing AI systems designed to generate exploitative content.
Real-World Impact
Recently, the minister toured the London base of Childline and listened to a mock-up of a call to counsellors involving a report of AI-related abuse. The interaction portrayed a teenager seeking help after being blackmailed with an explicit deepfake of themselves created using AI.
"When I learn about children experiencing blackmail online, it fills me with extreme anger and causes justified concern among families," he stated.
Alarming Data
A leading internet monitoring organization reported that cases of AI-generated abuse content – such as webpages that may include multiple files – had significantly increased so far this year.
Cases of the most severe category of content – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of prohibited AI images in 2025
- Depictions of infants to toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a vital step to ensure AI products are secure before they are launched," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a few clicks, giving offenders the ability to make potentially endless quantities of advanced, photorealistic child sexual abuse material," she added. "Material which additionally commodifies victims' suffering, and makes children, especially girls, more vulnerable both online and offline."
Support Session Data
The children's helpline also released data on support sessions in which AI was mentioned. AI-related harms raised in those conversations include:
- Using AI to evaluate weight, body and appearance
- AI assistants discouraging children from talking to trusted adults about harm
- Facing harassment online with AI-generated material
- Online extortion using AI-faked pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed – significantly more than in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.