British Tech Firms and Child Safety Officials to Examine AI's Capability to Generate Abuse Content
Tech firms and child protection organizations will receive permission to evaluate whether AI tools can produce child exploitation material under recently introduced British legislation.
Substantial Increase in AI-Generated Illegal Content
The declaration coincided with findings from a safety monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the government will permit approved AI companies and child protection organizations to inspect AI models – the foundational systems for conversational AI and image generators – and ensure they have sufficient safeguards to stop them from producing depictions of child sexual abuse.
The measure is "fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the danger in AI models early."
Addressing Legal Obstacles
The amendments address a legal obstacle: because producing and possessing CSAM is against the law, AI developers and other parties could not create such content even as part of a testing process. Until now, authorities had to wait until AI-generated CSAM appeared online before acting against it.
This legislation is designed to avert that problem by enabling experts to halt the creation of such images at their source.
Legal Structure
The government is introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, creating or distributing AI systems designed to produce exploitative content.
Real-World Impact
Recently, the minister visited the London base of a children's helpline and heard a simulated call to advisors involving a report of AI-based abuse. The interaction depicted an adolescent seeking help after being extorted with a sexualised AI-generated image of himself.
"When I hear about children experiencing extortion online, it is a source of intense anger in me and rightful anger amongst parents," he stated.
Concerning Statistics
A leading internet monitoring organization reported that instances of AI-generated abuse material – such as online pages that may include multiple images – had more than doubled so far this year.
Cases of category A content – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to ensure AI products are secure before they are launched," commented the chief executive of the online safety foundation.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a few simple actions, giving offenders the ability to create possibly limitless quantities of sophisticated, lifelike exploitative content," she added. "Material which additionally commodifies victims' trauma, and renders children, especially girls, more vulnerable both on and offline."
Support Interaction Data
The children's helpline also released data on support interactions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to rate body size and appearance
- Chatbots discouraging young people from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Online extortion using AI-faked pictures
Between April and September this year, the helpline delivered 367 counselling sessions where AI, chatbots and associated terms were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapeutic applications.