AI on Snapchat: Improved Transparency, Safety and Policies

16 April 2024

When Lenses arrived in 2015, augmented reality (AR) technology brought magic to life before our eyes, revolutionising what we thought was possible. Today, over 300 million Snapchatters engage with AR each day on average, and this type of technology has become an expected part of our everyday camera experience. 

Now, recent advancements in AI are opening up breathtaking possibilities, once again redefining what we thought was possible. 

Already, there are so many inspiring ways for Snapchatters to express themselves using AI, whether they’re creating an original Generative AI Chat Wallpaper to personalise the look of a conversation with a friend, transforming themselves in imaginative ways with AI-powered Lenses, or learning about the world through conversations with My AI. We are so excited by this technology’s potential to help our community continue to unlock their creativity and imagination. 

AI Transparency

We believe Snapchatters should be informed about the types of technologies they’re using, whether they’re creating fun visuals or learning through text-based conversations with My AI. 

We use icons, symbols and labels in-app to provide contextual transparency to Snapchatters when they’re interacting with a feature that is powered by AI technology. For example, when a Snapchatter shares an AI-generated Dreams image, the recipient sees a context card with more information. Other features, like the extend tool, which leverages AI to make a Snap appear more zoomed out, are demarcated as AI features with a sparkle icon for the Snapchatter creating the Snap. 

We also take great care to vet all political ads through a rigorous human review process, including a thorough check for any misleading content, such as deceptive imagery created with AI. 

Soon, we will be adding a watermark to AI-generated images. It will appear on images created with Snap’s generative AI tools when the image is exported or saved to camera roll. Recipients of an AI-generated image made on Snapchat may see a small ghost logo with the widely recognised sparkle icon beside it. These watermarks will help inform viewers that the image was made with AI on Snapchat. 

Standardised Safety Testing & Protocols 

We take seriously our responsibility to design products and experiences that prioritise privacy, safety and age appropriateness. Like all of our products, AI-powered features have always undergone strict review to ensure they adhere to our safety and privacy principles – and through our learnings over time, we’ve developed additional safeguards:

AI red-teaming is an increasingly common practice used to test and identify potential flaws in AI models and AI-enabled features, and to implement solutions that improve the safety and consistency of AI outputs. 

We have been an early adopter of novel AI red-teaming methods for generative image models, partnering with HackerOne on more than 2,500 hours of work to test the efficacy of our strict safeguards. 

Safety Filtering & Consistent Labelling

As we’ve expanded the Generative AI-enabled experiences available on Snapchat, we’ve established responsible governance principles and improved our safety mitigations as well. 

We’ve created a safety review process to detect and remove potentially problematic prompts in the earliest stages of development of AI Lens experiences styled by our team. All of our AI Lenses that generate an image from a prompt go through this process before they’re finalised and become available to our community. 

Inclusive Testing

We want Snapchatters from all walks of life to have equitable access to, and experiences with, every feature in our app, particularly those powered by AI. 

With this in mind, we’re implementing additional testing to minimise potentially biased AI results. 

Continued Commitment to AI Literacy

We believe in the tremendous potential for AI technology to improve our community’s ability to express themselves and connect with one another – and we’re committed to continuing to improve upon these safety and transparency protocols. 

While all of our AI tools, both text-based and visual, are designed to avoid producing incorrect, harmful or misleading material, mistakes may still occur. Snapchatters can report content in-app, and we appreciate this feedback. 

Finally, as part of this continued commitment to helping our community better understand these tools, we now have additional information and resources on our Support Site.