Inside terrifying underworld of paedophiles using AI to make and sell vile child sex abuse images

PAEDOPHILES are using AI to create and sell vile child sexual abuse images, it has been revealed.

Software intended to aid artists and graphic designers is being misused by abusers to generate the sick content.

A BBC investigation also found that the disgusting images are being hosted on mainstream content-sharing sites, including Patreon.

Those creating the material are reported to be using an AI program called Stable Diffusion.

Users input a block of text describing the image they want, and the program generates it.

In the case of child sexual abuse material (CSAM), it is being used to create realistic "pseudo-images" of the sexual assault and rape of children, including babies and toddlers.

These are then posted online, with many cartoon images appearing on Pixiv, a social media platform based in Japan where sexualised drawings of children are not illegal.

Others appear on Patreon, where users offer "uncensored" images for as little as £6.50.

Patreon said it has a "zero-tolerance" policy on CSAM, while Pixiv said it had banned all photo-realistic images of that nature.

Octavia Sheepshanks, who led the investigation, said: "Since AI-generated images became possible, there has been this huge flood… it's not just very young girls, they [paedophiles] are talking about toddlers.

"The volume is just huge, so people [creators] will say 'we aim to do at least 1,000 images a month."

Meanwhile, police emphasise that, even if no real children are abused to create the content, these pseudo-images are still illegal to possess, publish or transfer in the UK.

A spokesperson for Patreon said: "We already ban AI-generated synthetic child exploitation material.

"Creators cannot fund content dedicated to sexual themes involving minors."

They added that they were "very proactive" in their efforts to keep this sort of content off the platform.

Anna Edmundson, head of policy and public affairs at the NSPCC, said: "The speed with which these emerging technologies have been co-opted by abusers is breathtaking but not surprising, as companies who were warned of the dangers have sat on their hands while mouthing empty platitudes about safety.

"Tech companies now know how their products are being used to facilitate child sexual abuse and there can be no more excuses for inaction."

A spokesperson for Stability AI, which developed Stable Diffusion, said: "We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes."

They also said that their policies prohibit the use of Stable Diffusion to create CSAM.

The Government said that the Online Safety Bill, which is progressing through Parliament, will require companies to work proactively to prevent all forms of CSAM from appearing on their platforms.

This includes "grooming, live-streaming, child sexual abuse material and prohibited images of children", they added.
