Snap Values

Election Integrity

Snap is committed to safeguarding the integrity of democratic debate and protecting the online information space.

Our approach to election-related platform integrity is layered. At a high level, the core elements include:

  • Intentional product safeguards;

  • Clear and thoughtful policies;

  • Transparent content moderation and reporting mechanisms;

  • A diligent approach to political ads;

  • Collaborative, coordinated operations; and

  • Tools and resources to empower Snapchatters.

Taken together, these pillars underpin our approach to mitigating a broad range of election-related risks, while also ensuring Snapchatters have access to tools and information that support participation in democratic processes throughout the world. 


1. Intentional Product Safeguards

From the outset, Snapchat was designed differently from traditional social media. Snapchat doesn’t open to a feed of endless, unvetted content, and it doesn’t allow people to live stream. 

We’ve long recognized that the greatest threats from harmful digital disinformation stem from the speed and scale at which some digital platforms enable it to spread. Our platform policies and architecture limit the opportunities for unvetted or unmoderated content to achieve meaningful scale unchecked. Instead, we pre-moderate content before it can be amplified to a large audience, and broadly limit the distribution of news and political information unless it comes from trusted publishers and creators (including, for example, media organizations like The Wall Street Journal and The Washington Post in the US, Le Monde in France, and Times Now in India). 

We have brought the same intentionality to the introduction of generative AI features on Snapchat. Our chatbot, My AI, for example, may provide information about political events or context surrounding social issues; it is programmed not to offer opinions on political candidates or to encourage Snapchatters to vote for a particular outcome. And in our text-to-image features, we apply clear visual indicators that content is AI-generated, such as watermarks on downloaded content.

For more than a decade now, and across multiple election cycles, our product architecture has played a central role in creating a highly inhospitable environment for actors working to disrupt civic processes or undermine the information environment. And as outlined in our Transparency Report, the evidence suggests that this approach works well. We will continue to bring a product-forward approach to our platform integrity efforts.


2. Clear and Thoughtful Policies

To complement our product safeguards, we’ve implemented a range of policies that function to advance safety and integrity in the context of high-profile events like elections. Our Community Guidelines expressly prohibit, for example, harmful false information, hate speech, and threats or calls to violence. 

On the topic of harmful content in connection with elections, our external policies are robust and informed by leading researchers in the field of information integrity. They spell out specific categories of harmful content that are prohibited, including:

  • Procedural interference: misinformation related to actual election or civic procedures, such as misrepresenting important dates and times or eligibility requirements for participation;

  • Participation interference: content that includes intimidation related to personal safety or spreads rumors to deter participation in the electoral or civic process;

  • Fraudulent or unlawful participation: content that encourages people to misrepresent themselves to participate in the civic process or to illegally cast or destroy ballots; and

  • Delegitimization of civic processes: content aiming to delegitimize democratic institutions on the basis of false or misleading claims, such as claims about election results.

3. Transparent Content Moderation and Reporting Mechanisms

We use proactive and reactive content moderation mechanisms to ensure the safety of our users. We encourage users to report any content that may violate our rules. Reporting tools integrated into Snapchat and available on our support website enable users to easily flag content that may be misleading, manipulative, or otherwise contrary to our policies.

Reports are reviewed by our teams, taking into account the context, the electoral period, and applicable law.

When content or an account violates our rules, we may take proportionate measures, including:

  • removal of the content;

  • enforcement actions against the originating account, up to and including suspension;

  • removal of political advertisements that do not comply with our policies.

We provide internal guidance to ensure that our moderation teams understand the ways that election risks often intersect with other categories of harm, including hate speech, misogyny, targeted harassment, or even impersonation.

All of our policies apply to any form of content on our platform, whether user-generated or AI-generated. We also make clear that all policies apply equally to all Snapchatters, irrespective of their prominence. In all cases, our approach to harmful deceptive content is straightforward: we remove it when we find it. We don’t label it, we don’t downrank it; we take it down. Snapchatters who violate our content rules receive a strike and a warning message; if they persist in such violations, they may lose their account privileges (though all Snapchatters are provided an opportunity to appeal our enforcement decision).

4. Diligent Approach to Political Ads

As a platform that permits political advertising in connection with democratic elections, we’ve taken care to adopt rigorous practices to mitigate risks to election integrity. Political ads on Snapchat are human-reviewed and fact-checked before they are eligible for placement on our platform. To support these efforts, we partner as needed with International Fact Checking Network-member organizations to provide independent, nonpartisan assessments of whether advertisers’ claims can be substantiated. Our vetting process for political ads includes a thorough check for any misleading use of AI to create deceptive images or content.

To support transparency, an ad must clearly disclose who paid for it. And under our Political Ad Policies, we don’t allow ads to be paid for by foreign governments or any individuals or entities located outside of the country where the election is taking place. We believe it’s in the public’s interest to see which political ads are approved to run and keep a Political Ads Library that includes information about targeting, costs, and other insights.  

To ensure compliance with all of these processes, our Commercial Content Policies prohibit influencers from promoting paid political content outside of traditional ad formats. This ensures that all paid political content is subject to our ad review practices and disclaimer requirements.

5. Collaborative, Coordinated Operations

At Snap, we take a highly collaborative approach to operationalizing our election integrity safeguards. Internally, we have convened a cross-functional election integrity team, including misinformation, political advertising, and cybersecurity experts, to monitor all relevant developments in connection with elections. The breadth of representation in this group reflects the whole-of-company approach we take to safeguarding platform integrity, with representatives from Trust & Safety, Content Moderation, Engineering, Product, Legal, Policy, Privacy Operations, Security, and others.

Across our content moderation and enforcement, we maintain language capabilities commensurate with all countries in which Snap operates. We have also operationalized a crisis response protocol, to ensure operational agility in the face of high-risk global events.

This spirit of coordination extends to external collaborations as well. We routinely engage with democracy stakeholders and civil society organizations for advice, research insights, and to hear concerns or receive incident escalations. We often brief governments and elections officials on our approach to platform integrity. We also participate in multi-stakeholder initiatives, for example, working with civil society, elections authorities, and fellow industry stakeholders to help shape the Voluntary Election Integrity Guidelines for Technology Companies. And we welcome additional opportunities to engage constructively with all stakeholders in support of mitigating digital risks to civic processes.

6. Offering Tools and Resources to Empower Snapchatters

At Snap, we have always believed that civic engagement is one of the most powerful forms of self-expression. As a platform that helps people express themselves and has significant reach with new and first-time voters, we make it a priority to help our community get access to accurate and trusted information about news and world events, including where and how they can vote in their local election. Examples of these efforts include: 

  • Education: Provide factual and relevant content about elections, candidates, and issues through our content and talent partnerships on Discover.

  • Registration: Encourage Snapchatters to register to vote by leveraging credible third-party civic infrastructure. 

  • Engagement: Create excitement and energy in-app around civics and encourage Snapchatters to vote before/on Election Day. 

Conclusion

Snap’s values could not be clearer: we reject any abuse of our platform that threatens to undermine civic processes or poses a risk to Snapchatters’ safety. We’re proud of our record to date, and we will remain vigilant against election-related risks.

Resources for Upcoming Elections

Users in France can find more information on the upcoming election at: Élections municipales 2026 : les règles fixées par l’ARCOM (2026 municipal elections: the rules set by ARCOM)