Sharing our response to Civil Society Groups on election integrity

22 April 2024

Earlier this month, Snap, along with other major tech companies, received a letter from more than 200 civil society organisations, researchers and journalists urging us to increase our efforts to protect the integrity of elections in 2024. We appreciate their advocacy and share their commitment to ensuring that people around the world can participate in their elections, while doing everything we can to help protect our democracy.

Given the importance of these issues and the deep responsibility we feel to the hundreds of millions of people who use Snapchat to communicate with their friends and family and learn more about the world through our content, we felt it was important to release our response publicly. You can read our letter below and learn more about our plans for this year’s election here.

***

21 April 2024

Dear civil society organisations:

Thank you for your ongoing vigilance and advocacy in this year of unprecedented electoral activity around the world. We’re grateful for the opportunity to share more about how Snap is approaching our responsibilities in this environment, and how these efforts map to the longstanding values of our company. 

Overview of the Snapchat approach

Our approach to election-related platform integrity is layered. At a high level, the core elements include:

  • Intentional product safeguards;

  • Clear and thoughtful policies; 

  • A diligent approach to political ads;

  • Collaborative, coordinated operations; and

  • Tools and resources to empower Snapchatters.


Taken together, these pillars underpin our approach to mitigating a broad range of election-related risks, while also ensuring Snapchatters have access to tools and information that support participation in democratic processes throughout the world. 

1. Intentional product safeguards

From the outset, Snapchat was designed differently from traditional social media. Snapchat doesn’t open to a feed of endless, unvetted content, and it doesn’t allow people to live stream. 

We’ve long recognised that the greatest threats from harmful digital disinformation stem from the speed and scale at which some digital platforms enable it to spread. Our platform policies and architecture limit the opportunities for unvetted or unmoderated content to achieve meaningful scale unchecked. Instead, we pre-moderate content before it can be amplified to a large audience, and broadly limit the distribution of news and political information unless it comes from trusted publishers and creators (including, for example, media organisations like The Wall Street Journal and The Washington Post in the US, Le Monde in France and Times Now in India). 

Over this past year, we have brought the same intentionality to the introduction of generative AI features on Snapchat. We limit our AI products’ abilities to generate content or imagery that could be used to undermine civic processes or deceive voters. Our chatbot, My AI, for example, may provide information about political events or context surrounding social issues; it is programmed not to offer opinions on political candidates or encourage Snapchatters to vote for a particular outcome. And in our text-to-image features, we’ve adopted system-level restrictions on the generation of risky content categories, including the likeness of known political figures.

For more than a decade now, and across multiple election cycles, our product architecture has played a central role in creating a highly inhospitable environment for actors working to disrupt civic processes or undermine the information environment. And evidence suggests that it works well. Our most recent data indicates that from 1 January to 30 June 2023, the total number of enforcements globally for harmful false information (including risks to election integrity) represented 0.0038% of total content enforced, falling within the lowest likelihood categories of harm on our platform.

We will continue to bring a product-forward approach to our platform integrity efforts in 2024, including our commitments as signatories to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.

2. Clear and thoughtful policies

To complement our product safeguards, we’ve implemented a range of policies that function to advance safety and integrity in the context of high-profile events like elections. Our Community Guidelines expressly prohibit, for example, harmful false information, hate speech and threats or calls to violence. 

On the topic of harmful content in connection with elections, our external policies are robust and informed by leading researchers in the field of information integrity. They spell out specific categories of harmful content that are prohibited, including:

  • Procedural interference: misinformation related to actual election or civic procedures, such as misrepresenting important dates and times or eligibility requirements for participation;

  • Participation interference: content that includes intimidation or threats to personal safety, or that spreads rumours to deter participation in the electoral or civic process;

  • Fraudulent or unlawful participation: content that encourages people to misrepresent themselves to participate in the civic process or to illegally cast or destroy ballots; and

  • Delegitimisation of civic processes: content aiming to delegitimise democratic institutions, for example on the basis of false or misleading claims about election results.

¹ It’s worth noting that sharing AI-generated or AI-enhanced content on Snapchat is not against our policies, and certainly not something we understand to be inherently harmful. For many years now, Snapchatters have found joy in manipulating imagery with fun Lenses and other AR experiences, and we’re excited about the ways that our community can use AI to express themselves creatively. If, however, the content is deceptive (or otherwise harmful), we will of course remove it, irrespective of the degree to which AI technology may have played a part in its creation.