British Tech Firms and Child Protection Officials to Test AI's Capability to Create Abuse Content

Tech firms and child safety organizations will be granted authority to assess whether AI systems can produce child exploitation material under new British legislation.

Significant Increase in AI-Generated Harmful Material

The announcement coincided with findings from a safety watchdog showing that cases of AI-generated CSAM have risen sharply in the past year, from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the government will allow approved AI developers and child safety organizations to inspect AI systems – the foundational technology behind conversational AI and image generators – and ensure they have adequate safeguards to prevent them from creating images of child sexual abuse.

"This is ultimately about preventing abuse before it happens," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the risk in AI systems early."

Addressing Regulatory Obstacles

The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it. This legislation is designed to prevent that problem by helping to stop the production of those images at their origin.

Legal Framework

The amendments are being introduced by the authorities as revisions to the crime and policing bill, which also establishes a prohibition on possessing, producing or distributing AI models designed to generate child sexual abuse material.

Real-World Consequences

This week, the official toured the London base of Childline and listened to a mock-up call with advisors involving a report of AI-based abuse.
The interaction portrayed an adolescent seeking help after facing extortion involving an explicit AI-generated image of himself. "When I learn about young people experiencing blackmail online, it is a cause of extreme anger in me and rightful anger amongst parents," he said.

Concerning Statistics

A prominent online safety foundation reported that instances of AI-generated exploitation material – such as webpages that may contain numerous files – had risen significantly so far this year. Instances of the most severe content – the gravest form of exploitation – rose from 2,621 visual files to 3,086.

Female children were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025. Depictions of infants and toddlers rose from five in 2024 to 92 in 2025.

Industry Reaction

The law change could "constitute a vital step to guarantee AI tools are secure before they are launched," commented the chief executive of the online safety foundation. "Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further exploits victims' trauma, and renders young people, especially girls, more vulnerable both online and offline."

Counseling Interaction Data

The children's helpline also released information on support sessions where AI has been referenced. AI-related harms discussed in the conversations include:

- Employing AI to rate weight, physique and appearance
- Chatbots dissuading children from consulting safe guardians about harm
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated pictures

Between April and September this year, Childline conducted 367 counselling interactions in which AI, conversational AI and related terms were mentioned, four times as many as in the same period last year.
Fifty percent of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.