Sharing our Response to Civil Society Groups on Election Integrity

April 22, 2024

Earlier this month, Snap, along with other major tech companies, received a letter from more than 200 civil society organizations, researchers, and journalists urging us to increase our efforts to protect the integrity of elections in 2024. We appreciate their advocacy and share their commitment to ensuring that people around the world can participate in their elections, while doing everything we can to help protect democratic processes.

Given the importance of these issues, and the deep responsibility we feel to the hundreds of millions of people who use Snapchat to communicate with their friends and family and learn more about the world through our content, we felt it was important to release our response publicly. You can read our letter below, and learn more about our plans for this year’s elections here.

***

April 21, 2024

Dear civil society organizations:

Thank you for your ongoing vigilance and advocacy in this year of unprecedented electoral activity around the world. We’re grateful for the opportunity to share more about how Snap is approaching our responsibilities in this environment, and how these efforts map to the longstanding values of our company. 

Overview of the Snapchat Approach

Our approach to election-related platform integrity is layered. At a high level, the core elements include:

  • Intentional product safeguards;

  • Clear and thoughtful policies; 

  • A diligent approach to political ads;

  • Collaborative, coordinated operations; and

  • Tools and resources that empower Snapchatters.


Taken together, these pillars underpin our approach to mitigating a broad range of election-related risks, while also ensuring Snapchatters have access to tools and information that support participation in democratic processes throughout the world. 

1. Intentional Product Safeguards

From the outset, Snapchat was designed differently from traditional social media. Snapchat doesn’t open to a feed of endless, unvetted content, and it doesn’t allow people to live stream. 

We’ve long recognized that the greatest threats from harmful digital disinformation stem from the speed and scale at which some platforms enable it to spread. Our platform policies and architecture limit the opportunities for unvetted or unmoderated content to achieve meaningful scale. Instead, we pre-moderate content before it can be amplified to a large audience, and broadly limit the distribution of news and political information unless it comes from trusted publishers and creators (including, for example, media organizations like The Wall Street Journal and The Washington Post in the US, Le Monde in France, and Times Now in India).

Over the past year, we have brought the same level of intention to the introduction of generative AI features on Snapchat. We limit our AI products’ ability to generate content or imagery that could be used to undermine civic processes or deceive voters. Our chatbot, My AI, for example, may provide information about political events or context surrounding social issues, but it is programmed not to offer opinions on political candidates or encourage Snapchatters to vote for a particular outcome. And in our text-to-image features, we’ve adopted system-level restrictions on the generation of risky content categories, including the likeness of known political figures.

For more than a decade now, and across multiple election cycles, our product architecture has played a central role in creating a highly inhospitable environment for actors working to disrupt civic processes or undermine the information ecosystem. And the evidence suggests it works well: our most recent data indicates that from January 1 to June 30, 2023, enforcements globally for harmful false information (including risks to election integrity) accounted for just 0.0038% of total content enforcements, placing this among the lowest-likelihood categories of harm on our platform.

We will continue to bring a product-forward approach to our platform integrity efforts in 2024, including by honoring our commitments as a signatory to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.

2. Clear and Thoughtful Policies

To complement our product safeguards, we’ve implemented a range of policies that advance safety and integrity in the context of high-profile events like elections. Our Community Guidelines expressly prohibit, for example, harmful false information, hate speech, and threats or calls to violence.

Our external policies on harmful content in connection with elections are robust and informed by leading researchers in the field of information integrity. They spell out specific categories of prohibited content, including:

  • Procedural interference: misinformation related to actual election or civic procedures, such as misrepresenting important dates and times or eligibility requirements for participation;

  • Participation interference: content that threatens personal safety or spreads rumors to deter participation in the electoral or civic process;

  • Fraudulent or unlawful participation: content that encourages people to misrepresent themselves to participate in the civic process or to illegally cast or destroy ballots; and

  • Delegitimization of civic processes: content aiming to delegitimize democratic institutions, for example on the basis of false or misleading claims about election results.

We also provide internal guidance to ensure that our moderation teams understand the ways that election risks often intersect with other categories of harm, including hate speech, misogyny, targeted harassment, and impersonation.

All of our policies apply to any form of content on our platform, whether user-generated or AI-generated.¹ We also make clear that all policies apply equally to all Snapchatters, irrespective of their prominence. In all cases, our approach to harmful deceptive content is straightforward: we remove it. We don’t label it or downrank it; we take it down. Snapchatters who violate our content rules receive a strike and a warning message; if they persist in such violations, they may lose their account privileges (though all Snapchatters are given an opportunity to appeal our enforcement decisions).

3. Diligent Approach to Political Ads

As a platform that permits political advertising in connection with democratic elections, we’ve taken care to adopt rigorous practices to mitigate risks to election integrity. Most notably, every political ad on Snapchat is human-reviewed and fact-checked before it is eligible for placement on our platform. To support these efforts, we partner as needed with Poynter and other International Fact-Checking Network member organizations to provide independent, nonpartisan assessments of whether advertisers’ claims can be substantiated. Our vetting process for political ads includes a thorough check for any misleading use of AI to create deceptive images or content.

To support transparency, an ad must clearly disclose who paid for it. And under our Political Ad Policies, we don’t allow ads to be paid for by foreign governments or by any individuals or entities located outside of the country where the election is taking place. Because we believe it’s in the public’s interest to see which political ads are approved to run, we maintain a Political Ads Library that includes information about targeting, costs, and other insights.

To ensure compliance with all of these processes, our Commercial Content Policies prohibit influencers from promoting paid political content outside of traditional ad formats, so that all paid political content remains subject to our ad review practices and disclaimer requirements.

4. Collaborative, Coordinated Operations

At Snap, we take a highly collaborative approach to operationalizing our election integrity safeguards. Internally, we have convened a cross-functional election integrity team, including misinformation, political advertising, and cybersecurity experts, to monitor all relevant developments in connection with elections throughout the world in 2024. The breadth of representation in this group reflects the whole-of-company approach we take to safeguarding platform integrity, with representatives from Trust & Safety, Content Moderation, Engineering, Product, Legal, Policy, Privacy Operations, Security, and others.

Across our content moderation and enforcement operations, we maintain language capabilities covering all countries in which Snap operates. We have also operationalized a crisis response protocol to ensure agility in the face of high-risk global events.

This spirit of coordination extends to external collaborations as well. We routinely engage with democracy stakeholders and civil society organizations for advice and research insights, and to hear concerns or receive incident escalations. (Many signatories to your letter remain valued partners to us for these purposes.) We often brief governments and election officials on our approach to platform integrity. We also participate in multistakeholder initiatives; this year, for example, we worked with civil society, elections authorities, and fellow industry stakeholders to help shape the Voluntary Election Integrity Guidelines for Technology Companies. And we welcome additional opportunities to engage constructively with all stakeholders in support of mitigating digital risks to civic processes.

5. Offering Tools and Resources to Empower Snapchatters

At Snap, we have always believed that civic engagement is one of the most powerful forms of self-expression. As a platform that helps people express themselves and has significant reach among new and first-time voters, we make it a priority to help our community access accurate and trusted information about news and world events, including where and how they can vote in their local elections.

In 2024, these efforts will focus on three pillars that have remained constant throughout the years: 

  • Education: Provide factual and relevant content about elections, candidates, and issues through our content and talent partnerships on Discover.

  • Registration: Encourage Snapchatters to register to vote by leveraging credible third-party civic infrastructure.

  • Engagement: Create excitement and energy in-app around civics, and encourage Snapchatters to vote before and on Election Day.


Many of these plans are still in the works for 2024, but they will build on the successes we’ve had over the years in connecting Snapchatters with informative resources.

Conclusion

At such a consequential moment, both for democracies around the world and for the trajectory of powerful new technologies, it is as important as ever that platforms are transparent about their values. And on this point, our values could not be clearer: we reject any abuse of our platform that threatens to undermine civic processes or poses a risk to Snapchatters’ safety. We’re proud of our record to date, but we must remain vigilant to election-related risks. To that end, we thank you again for your constructive engagement on these issues.

Sincerely, 

Kip Wainscott

Head of Platform Policy


¹ It’s worth noting that sharing AI-generated or AI-enhanced content on Snapchat is not against our policies, and certainly not something we understand to be inherently harmful. For many years now, Snapchatters have found joy in manipulating imagery with fun Lenses and other AR experiences, and we’re excited about the ways that our community can use AI to express themselves creatively. If, however, the content is deceptive (or otherwise harmful), we will of course remove it, irrespective of the degree to which AI technology may have played a part in its creation.
