How We Prevent the Spread of False Information on Snapchat
September 8, 2022
With the midterm elections approaching in the United States, we wanted to highlight our longstanding approach to preventing the spread of false information on Snapchat, and the steps we continue to take to build on that strong foundation.
Our efforts have always started with the architecture of our platform. With Snapchat, we wanted to build something different to capture the spontaneity and fun of real-life conversations. From the beginning, we’ve built safety and privacy into the fundamental design of our platform. That’s why Snapchat opens directly to a camera, instead of a feed of content, and is focused on connecting people who are already friends in real life. We have always wanted Snapchatters to be able to express themselves and have fun with their friends — without the pressure to grow a following, gain views, or earn likes. Snapchat reflects how we normally communicate face to face, or on the phone, because digital communication on Snapchat deletes by default. Across Snapchat, we limit the ability for unmoderated content to reach a large audience. We do this by holding amplified content to a higher standard to make sure that it complies with our content guidelines. While Snapchat has evolved over the years, we have always tried to build technology that enables creativity and prioritizes the safety, privacy, and well-being of our community.
In addition to our foundational architecture, there are a number of key policies that help us prevent the spread of false information on Snapchat:
Our policies have long prohibited the spread of false information. Both our Community Guidelines, which apply equally to all Snapchatters, and our content guidelines, which apply to our Discover partners, prohibit spreading false information that can cause harm — including, for example, conspiracy theories, denying the existence of tragic events, unsubstantiated medical claims, or undermining the integrity of civic processes. This includes sharing media that is manipulated to be misleading about real-life events (including harmful deepfakes or shallow-fakes).
Our approach to enforcing against content that includes false information is straightforward: we remove it. When we find content that violates our guidelines, our policy is to take it down, which immediately reduces the risk of it being shared more widely.
Across our app, we don’t give unvetted content the opportunity to ‘go viral.’ Snapchat does not offer an open newsfeed where people or publishers can broadcast false information. Our Discover platform features content from vetted media publishers, and content on our Spotlight platform is proactively moderated before it is eligible to reach a large audience. We offer Group Chats, which are limited in size, are not recommended by algorithms, and are not publicly discoverable on our platform if you are not a member of that Group.
We use human review processes to fact-check all political and advocacy ads. All political ads, including election-related ads and issue advocacy ads, must include a transparent “paid for” message that discloses the sponsoring organization, and we provide access to information about all ads that pass our review in our Political Ads Library. In connection with U.S. elections, we partner with the nonpartisan Poynter Institute to independently fact-check political ad statements. In addition, to help mitigate the risks of foreign interference in elections, we prohibit the purchase of political ads from outside the country in which the ad will run.
We are committed to increasing transparency into our efforts to combat false information. Our most recent Transparency Report, which covered the second half of 2021, included several new elements, including data about our efforts to enforce against false information globally. During this period, we took action against 14,613 pieces of content and accounts for violations of our policies on false information — and we plan to provide more detailed breakdowns of these violations in our future reports.
To build on this, ahead of the midterm elections, we have also established dedicated internal processes for information-sharing and for monitoring the effectiveness of our policies and other harm-mitigation efforts, ensuring we can calibrate our approach as needed. We are also engaging actively with researchers, NGOs, and other stakeholders from across the election integrity, democracy, and information integrity communities to ensure our safeguards are anchored in the wider context of emerging trends and informed by expert perspectives.
We’re also focused on partnering with experts to promote greater information integrity. Through our Discover content platform, we focus on providing credible and accurate news coverage to our community, from publishers like The Wall Street Journal, The Washington Post, VICE, and NBC News.
We’ve also developed an expansive array of in-app resources that connect Snapchatters with civic information, including opportunities to register to vote or even run for local office.
Doing our part to promote a responsible information environment remains a major priority across our company, and we will continue to explore innovative approaches to reach Snapchatters where they are, while strengthening our efforts to protect Snapchat from the risks of viral false information.