Tech giants including TikTok, Snapchat and Stability AI have all signed a pledge vowing to tackle the despicable rise of AI-generated child sexual abuse images at an event hosted by the Home Secretary Suella Braverman today (30 October).
Charities, tech firms, academics and international government representatives all gathered to focus on how to tackle the threat of child sexual abuse material generated by Artificial Intelligence (AI). Data from the Internet Watch Foundation (IWF) has shown that thousands of images depicting the worst kind of abuse can be found on the dark web, and that this imagery is realistic enough to be treated as real imagery under UK law.
At the event, hosted in partnership with the IWF and taking place in the lead-up to the government’s AI Safety Summit, the Home Secretary addressed attendees, many of whom have come together to sign a statement pledging to cooperate to mitigate the spread of AI-generated images depicting children being abused.
The government is also exploring further investment in the use of AI to combat child sexual abuse, and will continue to examine options for innovation to tackle the threat from AI-generated child sexual abuse material.
Home Secretary Suella Braverman said:
“Child sexual abuse images generated by AI are an online scourge. This is why tech giants must work alongside law enforcement to clamp down on their spread. The pictures are computer-generated but they often show real people – it’s depraved and damages lives.
“The pace at which these images have spread online is shocking and that’s why we have convened such a wide group of organisations to tackle this issue head-on. We cannot let this go on unchecked.”
The IWF has warned that the increased availability of this imagery not only poses a real risk to the public by normalising sexual violence against children; some of the imagery is also based on children who have appeared in ‘real’ child sexual abuse material in the past. This means innocent survivors of traumatic abuse are being revictimised.
The surge in AI-generated images could also hinder law enforcement agencies in tracking down and identifying victims of child sexual abuse, and in detecting offenders and bringing them to justice.
Signatories to the joint statement, including tech giants like TikTok, Snapchat and Stability AI, have pledged to sustain “technical innovation around tackling child sexual abuse in the age of AI”. The statement affirms that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.
Statistics released by the IWF last week showed that in a single month, they investigated more than 11,000 AI images which had been shared on a dark web child abuse forum. Almost 3,000 of these images were confirmed to breach UK law – meaning they depicted child sexual abuse.
Some of the images are based on celebrities, whom AI has ‘de-aged’ and are then depicted being abused. There are even images which are based on entirely innocuous images of children posted online, which AI has been able to ‘nudify’.
Susie Hargreaves OBE, Chief Executive of the IWF, said:
“We first raised the alarm about this in July. In a few short months, we have seen all our worst fears about AI realised.
“The realism of these images is astounding, and improving all the time. The majority of what we’re seeing is now so real, and so serious, it would need to be treated exactly as though it were real imagery under UK law.
“It is essential, now, we set an example and stamp out the abuse of this emerging technology before it has a chance to fully take root. It is already posing significant challenges. It is great to see the Prime Minister acknowledge the threat posed by the creation of child sexual abuse images in his speech last week following the publication of our report.
“We are delighted the government has listened to our calls to make this a top international priority ahead of the AI summit, and are grateful to the Home Secretary for convening such a powerful discussion.”
Chris Farrimond, Director of Threat Leadership at the National Crime Agency (NCA), said:
“We are starting to see realistic images and videos of child sexual abuse created using Artificial Intelligence, and an exponential growth in offenders discussing how to use it to generate images of real children.
“We know that as AI technologies mature and become more widely applied, they will create opportunities for offenders. But there will also be new opportunities for law enforcement and technology platforms to take action that protects children and aids identification of their abusers.
“That is why the NCA is bringing together international law enforcement and industry partners at the Virtual Global TaskForce in Washington next month. It is vital that all of our combined creativity, skills and resources are being utilised to protect our most vulnerable.
“We estimate that there are 680,000 to 830,000 adults in the UK (1.3% to 1.6% of the adult population) that pose some degree of sexual risk to children, which is why tackling child sexual abuse is a priority for the NCA and our policing partners. We will investigate and prosecute individuals who create, share, possess, access or view AI generated child sexual abuse material in the same way as if the image is of a real child.”
Sir Peter Wanless, NSPCC Chief Executive, said:
“AI is being developed at such speed that it’s vital the safety of children is considered explicitly and not as an afterthought in the wake of avoidable tragedy.
“Already we are seeing AI child abuse imagery having a horrific impact on children, traumatising and retraumatising victims who see images of their likeness being created and shared. This technology is giving offenders new ways to organise and risks enhancing their ability to groom large numbers of victims with ease.
“It was important to see child safety on the agenda today. Further international and cross-sector collaboration will be crucial to achieve safety by design.”
The government also recognises that AI can be a powerful tool for good and the Home Secretary emphasised at the event that AI also poses opportunities to improve the way we tackle child sexual abuse. Together with the police and other partners, the Home Office has developed the world-leading Child Abuse Image Database (CAID), which is already using AI to grade the severity of child sexual abuse material.
The AI tool helps police officers sort through large volumes of data at a faster pace, bringing certain images to the surface for the officer to focus on to aid investigations. This enables officers to more rapidly identify and safeguard children, as well as identify offenders. These tools also support the welfare of officers, as they reduce prolonged exposure to these images. Other tools are also in development which will use AI to safeguard children and identify perpetrators more quickly.
While the opportunities posed in this space are promising, AI is advancing far more quickly than anyone could have anticipated.
Without appropriate safety measures that keep pace with its development, this technology still poses significant risks, and that is why the Home Secretary is placing an emphasis on working constructively with a wide range of partners to mitigate these risks and ultimately, protect the public.
At the start of November, the UK is hosting the first ever major global AI Safety Summit at Bletchley Park.
The summit will turbocharge global action on the safe and responsible development of frontier AI around the world – bringing together key nations, technology companies, researchers, and civil society groups.