Harmful False or Deceptive Information

Community Guidelines Explainer Series

Updated: August 2023

  • We prohibit spreading false information that causes harm or is malicious, such as denying the existence of tragic events, making unsubstantiated medical claims, undermining the integrity of civic processes or manipulating content for false or misleading purposes.

  • We prohibit pretending to be someone (or something) that you're not, or attempting to deceive people about who you are. This includes impersonating your friends, celebrities, brands or other organisations.

  • We disallow spam and deceptive practices, including imitating Snapchat or Snap Inc.



Overview


Doing our part to support a responsible information environment has been a major priority at Snap. Deceptive practices take many forms, and we know they can undermine trust and pose a threat to Snapchatters’ safety and security. Our policies are intended to reduce the spread of misinformation and protect users from fraud and spam across a broad range of circumstances.


What you should expect


Our Community Guidelines relating to Harmful False or Deceptive Information cover two distinct, but related, categories of harm: (1) false information and (2) fraudulent or spammy behaviour.


1. False Information


Content that distorts facts can have harmful consequences for users and for society. We know it can sometimes be tough to know what’s accurate, particularly when it comes to fast-breaking current events or complicated matters of science, health and world affairs. For this reason, our policies focus not only on whether information is inaccurate or misleading, but also on its potential for harm.

There are several categories of information in which the misrepresentation of facts can pose unique dangers. Across these areas, our teams take action against content that is misleading or inaccurate, irrespective of whether the misrepresentations are intentional. In this way, our policies operate against all forms of information threats, including misinformation, disinformation, malinformation and manipulated media. 

Examples of the information categories that we view as particularly vulnerable to harm include the following:

  • Content that denies the existence of tragic events. We prohibit content that, for example, denies the Holocaust or disputes the events of the Sandy Hook school shooting. Misrepresentations and unfounded conspiracy theories regarding such tragedies may contribute to violence and hate, in addition to harming users whose lives and families have been impacted by such events.

  • Content that promotes unsubstantiated medical claims. We disallow content that, for example, recommends untested therapies for preventing the spread of Covid-19, or that features unfounded conspiracy theories about vaccines. While the field of medicine is ever-changing and public health agencies may revise their guidance, such credible organisations are subject to standards and accountability, and we look to them to provide a benchmark for responsible health and medical guidance.

  • Content that undermines the integrity of civic processes. Elections and other civic processes play an essential role in the functioning of rights-respecting societies, and they also present unique targets for information manipulation. To safeguard the information environment around such events, we enforce our policies against the following types of threats to civic processes:

    • Procedural interference: misinformation related to actual election or civic procedures, such as misrepresenting important dates and times or eligibility requirements for participation.

    • Participation interference: content that includes threats to personal safety or spreads rumours in order to deter participation in the electoral or civic process.

    • Fraudulent or unlawful participation: content that encourages people to misrepresent themselves to participate in the civic process or to illegally cast or destroy ballots.

    • Delegitimisation of civic processes: content that aims to delegitimise democratic institutions, for example on the basis of false or misleading claims about election results.


Our policies against harmful false information are complemented by extensive product design safeguards and advertising rules that limit virality, promote transparency, and elevate the role of authenticity across our platform. For more information on the ways our platform architecture supports these objectives, visit this blog post.

2. Fraudulent or Spammy Behaviour

Fraud and spam can subject Snapchatters to substantial financial harm, cybersecurity risks, and even legal exposure (not to mention unpleasant and annoying experiences). To reduce these risks, we prohibit deceptive practices that undermine trust in our community. 

Prohibited practices include content that promotes scams of any kind; get-rich-quick schemes; unauthorised or undisclosed paid content; and the promotion of fraudulent goods or services, including counterfeit goods, documents or certificates. We also prohibit pay-for-follower promotions or other follower-growth schemes; the promotion of spam applications; and the promotion of multilevel marketing or pyramid schemes. In addition, we prohibit money laundering (including money couriering or money muling) of any kind. This includes receiving and transferring money that is illegally obtained or from an unknown source on behalf of someone else; providing unauthorised and illegal money transmission or currency exchange services; and soliciting or promoting these activities.

Finally, our policies prohibit pretending to be someone (or something) that you’re not, or attempting to deceive people about who you are. This includes impersonating your friends, celebrities, brands or other organisations. These rules also mean that it’s not okay to imitate Snapchat or Snap Inc. branding.


How we enforce these policies


Content that violates our rules against Harmful False or Deceptive Information is removed. Users who share, promote or distribute violating content will be notified of the violation, and users who continue to violate these policies will have their account access restricted.

In 2022, we expanded our reporting menu categories for false information, enabling users to report social, political and health-related misinformation more specifically. Please let us know when you or someone else is being impersonated, or if you encounter spam or misinformation. Once we receive a report, our Trust & Safety teams can take action to address the impersonation or prevent harmful content from persisting.


On our high-reach surfaces, like Spotlight and Discover, we take a proactive approach to moderating content and promoting information integrity. Even so, we greatly value feedback and reports about any harmful content you encounter on these surfaces; they help alert us to breakdowns in our processes for keeping these spaces free of harmful information.



Takeaway


Doing our part to promote a responsible information environment remains a major priority across our company, and we will continue to explore innovative approaches to protecting Snapchatters from the risks of Harmful False or Deceptive Information.

As we continue these efforts, we are committed to providing transparent insights into the effectiveness of our approach. Through our transparency reports, we provide country-level information about our enforcement actions against misinformation globally, and we plan to provide more detailed breakdowns of these violations in future reports.

We are committed to continually calibrating our policies to improve our ability to address harmful content and behaviour, and to working with diverse leaders from across the safety community to ensure we are advancing these objectives responsibly. For more information about our safety efforts, please visit our Privacy and Safety Hub.