New Research On How People Are Interacting With GenAI Sexual Content

November 19, 2024

The rapid rise of AI tools in recent years has created, and will continue to create, new opportunities for creativity, learning, and connection. However, the technology has also introduced new dynamics to existing online risks. New research shows that as the number of people who encounter sexually charged AI-generated images and videos online continues to grow, awareness of the illegality of some of this content remains a challenge.

To gain a better understanding of the attitudes and behaviors of teens and young adults across all platforms and services, Snap conducts and shares annual, industry-wide research called our Digital Well-Being Index. (Snap commissioned the research, but it covers Generation Z’s experiences across digital spaces generally, with no specific focus on Snapchat.) While we plan to release the full results of our Year Three study in conjunction with international Safer Internet Day in February 2025, we want to preview some key findings on how teens, young adults, and even parents are engaging with and reacting to generative AI-based sexual content. We’re doing so today in light of the global focus on child sexual exploitation and abuse this week, and in conjunction with our participation in the Empowering Voices DC Summit, which focused on addressing the harms associated with AI-generated sexual content.

For example, in our study, which surveyed 9,007 teens, young adults, and parents of teens across six countries¹, 24% said they had seen some sort of AI-generated images or videos that were sexual in nature. Of those who claimed to have seen this type of content, only 2% said the imagery was of someone younger than 18 years old.

Encouragingly, when people saw this type of content, 9 out of 10 took some action, ranging from blocking or deleting the content (54%) to speaking with trusted friends or family (52%). However, only 42% said they reported the content to the platform or service where they saw it, or to a hotline or helpline. This insight follows a larger trend of lower reporting rates on digital safety-related issues generally. We pointed out in an earlier post the importance of counteracting negative perceptions of reporting so that young people do not normalize exposure to certain problematic content and conduct online, or equate reporting with tattling.

Even more alarming, more than 40% of respondents were unclear about the legal obligation of platforms and services to report sexual images of minors, even when such images are intended as jokes or memes. And, while a larger share (more than 70%) recognized that it is illegal to use AI technology to create fake sexual content of a person, or to retain, view, or share sexual images of minors, these findings indicate there is considerable work to do to ensure the general public is aware of the legal requirements related to this type of content.

In the U.S., for example, nearly 40% of respondents said they believe it is legal to use AI technology to create fake sexual images of a person. And, anecdotally, we have heard of a concerning trend from industry colleagues: with the proliferation of this type of content, some teen girls in particular are feeling “left out” if they are not featured in the AI-manipulated sexual imagery that their peers are inappropriately creating and sharing. This disturbing point further underscores the need to educate and raise awareness of this specific online risk, with trusted adults and informed peers playing an active role in discouraging this type of behavior.

Snap’s ongoing commitment

At Snap, we are continually investing in resources, tools, and technology to help foster safer, healthier, and more positive experiences on Snapchat and across the tech ecosystem.

In some cases, we use behavioral “signals” to identify potentially illegal activity so that we can proactively remove bad actors and report them to authorities. Moreover, as a service that includes a conversational AI chatbot, we work to be extra vigilant in preventing the potential generation of such material on Snapchat, as well as guarding against the sharing and distribution of material that may have been generated on other platforms. We treat suspected AI-generated sexual imagery of minors the same as “authentic” child sexual exploitation and abuse imagery (CSEAI): removing the content once we become aware of it, suspending the violating account, and reporting it to the National Center for Missing and Exploited Children (NCMEC).

This is in addition to leveraging and deploying technology designed to prevent the spread of CSEAI, including PhotoDNA (to detect duplicates of known illegal images) and Google’s CSAI Match (to detect duplicates of known illegal videos). We also recently began using Google’s Content Safety API (to aid in detecting novel, “never-before-hashed” imagery in public content). In addition, we have engaged with NCMEC on how to leverage the unique digital signatures (or “hashes”) of the 4,700 reports they received last year related to child sexual abuse material that involved GenAI.

We collaborate with law enforcement, supporting their investigations, and invest heavily in our global Trust and Safety and Law Enforcement Operations teams that work 24/7 to help keep our community safe. We host annual summits for law enforcement in the U.S., aiming to ensure officers and agencies know how to take appropriate action against any illegal activity that may be taking place on our platform. 

We also continue to expand our in-app reporting tools, which include options for our community to flag nudity and sexual content, and specifically CSEAI. Reporting problematic content and accounts is critical in helping tech companies remove bad actors from their services and thwart further activity before it potentially causes harm to others. 

More recently, we added new features to our Family Center suite of tools, which parents can use to better understand how their teen is using Snapchat, including our AI chatbot. We also released new resources to help educators and school administrators understand how their students use Snapchat and the resources we offer to assist schools in their efforts to create safe and supportive environments for students.

And, we are continuing to invest in ways to raise public and Snapchatter awareness of online sexual harms. Our in-app “Safety Snapshot” episodes focus on sexual risks, including topics such as child online grooming and trafficking. We were also the first entity to support Know2Protect, a U.S. Department of Homeland Security campaign focused on educating and empowering young people, parents, trusted adults, and policymakers about online child sexual abuse. 

We look forward to continuing to work with all types of stakeholders -- parents, young people, educators, and policymakers, to name a few -- on these whole-of-society issues, and we hope the insights from our cross-platform research help create new ideas and opportunities to ensure people are aware of existing and emerging online threats, and of the resources available to help combat these risks.

— Viraj Doshi, Platform Safety Lead

¹ Countries included in the study are: Australia, France, Germany, India, the UK, and the U.S.
