In a shocking case of child exploitation, a man named Justin Culmo was arrested in mid-2023 for creating thousands of illegal images of children using an AI model called Stable Diffusion. Culmo, who has been indicted in Florida on various child exploitation charges, including abuse of his own daughters, admitted to filming minors at Disney World and at a middle school. Law enforcement officials, including former Department of Homeland Security agent Jim Cole, have highlighted the dangers of AI being used for such nefarious purposes.
The Culmo case is just one example of AI image manipulation being used to victimize children. Despite the seriousness of the allegations, Disney has said it was not contacted by law enforcement regarding the activities at its park. Global law enforcement agencies had been pursuing Culmo since 2012, with detectives using facial recognition technology to identify his victims and trace manipulated images back to him. The case reflects a disturbing trend in which AI is increasingly used to create realistic images of child abuse.
Recent cases involving individuals such as Army soldier Seth Herrera and Wisconsin resident Steven Anderegg have also drawn attention to the use of generative AI tools in producing sexualized images of children. The Internet Watch Foundation reported detecting more than 3,500 AI-generated child sexual abuse images online this year alone. Stable Diffusion 1.5 has become a popular tool among pedophiles because it can run locally on their own computers without storing illegal images on external servers, making the activity harder to detect.
Researchers have found that early versions of Stable Diffusion were trained on illicit images of minors, raising concerns about how AI tools are developed and monitored. While Stability AI, the company behind the tool, has invested in features to prevent misuse, critics argue that more needs to be done to keep AI out of illegal activities. The government is exploring how to prosecute AI-enabled child exploitation cases; charges are likely to be treated as equivalent to standard child sexual abuse material cases, and some may fall under American obscenity laws.
As the Department of Justice takes a hard line on prosecuting AI-enabled criminal conduct, the message to potential offenders is clear: using AI to perpetrate crimes against children will not go unpunished. The case of Justin Culmo serves as a stark reminder of the dangers posed by the misuse of AI technology and of the importance of vigilance in detecting and preventing such crimes. Child protection experts like Jim Cole emphasize the need for stronger safeguards and monitoring to prevent the exploitation of vulnerable children through AI tools.