UK Technology Companies and Child Protection Officials to Test AI's Ability to Create Exploitation Content

Technology companies and child safety agencies will be granted authority to evaluate whether artificial intelligence tools can generate child exploitation images under recently introduced British laws.

Significant Rise in AI-Generated Harmful Content

The announcement came as figures from a protection monitoring body showed that cases of AI-generated child sexual abuse material have risen sharply in the past year, from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will allow approved AI companies and child protection organizations to examine AI models – the foundational technology behind conversational AI and image generators – and verify that they have adequate safeguards to stop them from producing depictions of child sexual abuse.

The changes are "fundamentally about preventing exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the danger in AI models early."

Addressing Regulatory Challenges

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties cannot create such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before acting on it. The new law is designed to avert that problem by enabling experts to halt the creation of such images at their source.

Legislative Structure

The changes are being introduced as amendments to the crime and policing bill, which also implements a ban on possessing, producing or sharing AI models designed to create exploitative content.

Practical Impact

This week, the minister toured the London headquarters of a children's helpline and listened to a simulated call to advisors featuring an account of AI-based exploitation.
The call portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself created with AI. "When I learn about young people experiencing blackmail online, it is a source of extreme frustration for me and justified anger amongst parents," he stated.

Alarming Data

A prominent online safety foundation stated that cases of AI-generated abuse material – such as webpages that may contain multiple files – had more than doubled so far this year. Instances of the most severe content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

Girls were overwhelmingly victimized, making up 94% of prohibited AI images in 2025.
Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025.

Industry Response

The law change could "represent a crucial step to ensure AI tools are safe before they are launched," commented the head of the online safety foundation. "AI tools have made it possible for victims to be victimised repeatedly with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and renders young people, especially girls, less safe both online and offline."

Counseling Session Data

Childline also released details of support interactions where AI has been mentioned. AI-related risks mentioned in the sessions include:

Employing AI to rate body size, physique and appearance
AI assistants discouraging children from consulting safe adults about harm
Facing harassment online with AI-generated content
Online blackmail using AI-manipulated images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related topics were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.