What's happened
The UK government has introduced legislation permitting AI developers and child protection agencies to test AI models for their potential to generate child sexual abuse images. The aim is to prevent such content from being created in the first place, amid rising reports of AI-generated abuse material, particularly depicting girls and infants.
What's behind the headline?
The legislation marks a significant shift in AI regulation, emphasizing prevention over reaction. By allowing trusted organizations to test AI models before deployment, the UK aims to embed safeguards against the creation of illegal content. This proactive approach could set a global precedent, encouraging other nations to regulate AI safety at the source. However, the move also raises questions about enforcement and the potential for misuse if testing protocols are not strictly managed. The focus on safeguarding children, especially girls and infants, underscores the severity of the issue, but the effectiveness of the measures will depend on rigorous oversight and industry compliance. The law may well reduce the production of AI-generated abuse images, but its success hinges on implementation and international cooperation.
What the papers say
The Mirror reports that the UK government’s amendment to the Crime and Policing Bill will permit AI developers and child protection organizations to test models for risks of generating illegal content, with the aim of preventing abuse before it occurs. The Guardian highlights that this move is driven by a doubling of reports of AI-generated child sexual abuse material, with a particular surge in depictions of infants and girls. Sky News emphasizes that the new rules will enable organizations like the IWF to scrutinize AI models without legal repercussions, aiming to embed safety into AI development from the outset. All sources agree that the legislation is a pioneering step, but experts like the NSPCC advocate for mandatory testing to ensure maximum protection, warning that optional measures may fall short of safeguarding children effectively.
How we got here
Current UK law criminalizes possession and creation of child sexual abuse material, which limits AI safety testing. Reports of AI-generated abuse images have more than doubled in 2025, prompting the government to amend legislation. The new law will enable proactive testing of AI models to identify risks before content appears online, addressing concerns raised by organizations like the IWF and NSPCC.
Go deeper
More on these topics
- The Internet Watch Foundation is a registered charity based in Cambridgeshire, England. It states that its remit is "to minimise the availability of online sexual abuse content, specifically child sexual abuse images and videos hosted anywhere in the world".
- Elizabeth Louise Kendall is a British Labour Party politician who has been Member of Parliament for Leicester West since 2010. Kendall was educated at Queens' College, Cambridge, where she read history.
- The National Society for the Prevention of Cruelty to Children is a charity campaigning and working in child protection in the United Kingdom and the Channel Islands.