Child Sexual Abuse Pictures Found in Database That’s Used to Train AI Image Generators
A dataset used to train AI image generators contains thousands of suspected images of child sexual abuse, according to a new report.
An investigation by the Stanford Internet Observatory, part of Stanford University's Cyber Policy Center, found that the LAION-5B dataset, which has been used to train several AI image generators, including Stable Diffusion 1.5, an earlier version of Stability AI's model, contained more than 3,200 images of suspected child sexual abuse.
Just over 1,000 of those images were confirmed to be child sexual abuse material, and the report warns that their presence in the dataset may allow generative AI tools built on it to create new child abuse content.
LAION-5B is a massive public dataset of links to more than five billion images scraped from the open web.
It has been used by a variety of AI companies, which require huge amounts of data to train generative AI models that can produce new images in seconds.
Experts have long cautioned that AI image generators risk unleashing a tsunami of ultra-realistic AI-generated images of child sexual abuse, and the Internet Watch Foundation (IWF) has warned that such images are already circulating widely on the dark web.
Online safety organizations in the UK, meanwhile, have called for “urgent action” over instances of children using AI image generators at school to create indecent images of their fellow pupils.
Source: Business Insider