Below we include definitions of commonly used terms, policies, and operational practices discussed in our transparency report. 

Sexual Content: Refers to the promotion or distribution of sexual nudity, pornography, or commercial sexual services. For more information, please review our explainer on Sexual Content. Note that, for purposes of our transparency reports, we define and track data regarding Child Sexual Exploitation and Abuse separately from other types of Sexual Content.

Harassment and Bullying: Refers to any unwanted behavior that could cause an ordinary person to experience emotional distress, such as verbal abuse, sexual harassment, or unwanted sexual attention. This category also includes the sharing or receipt of non-consensual intimate imagery (NCII). For more information, please review our explainer on Harassment & Bullying.

Threats & Violence: Threats refer to content that expresses the intention to cause serious physical or emotional harm. Violence refers to any content that attempts to incite, glorify, or depict human violence, animal abuse, gore, or graphic imagery. For more information, please review our explainer on Threats, Violence, and Harm.

Self-Harm & Suicide: Refers to the glorification of self-harm, including the promotion of self-injury, suicide, or eating disorders. For more information, please review our explainer on Threats, Violence, and Harm.

False Information: False Information includes false or misleading content that causes harm or is malicious, such as denying the existence of tragic events, making unsubstantiated medical claims, undermining the integrity of civic processes, or manipulating content for false or misleading purposes (including through generative AI or deceptive editing). For more information, please review our explainer on Harmful False or Deceptive Information.

Impersonation: Occurs when an account falsely pretends to be associated with another person or brand. For more information, please review our explainer on Harmful False or Deceptive Information.

Spam: Spam refers to unsolicited messages or irrelevant shared content that is likely to cause harmful confusion or otherwise pose a risk or nuisance to legitimate users. For more information, please review our explainer on Harmful False or Deceptive Information.

Drugs: Drugs refers to the distribution and use of illegal drugs (including counterfeit pills) and other illicit activity involving drugs. For more information, please review our explainer on Illegal or Regulated Activities.

Weapons: Refers to implements designed or used for inflicting death, bodily harm, or property damage. For more information, please review our explainer on Illegal or Regulated Activities.

Other Regulated Goods: Refers to the promotion of regulated goods or industries, including illegal gambling, tobacco products, and alcohol. This category also includes illegal or dangerous activities that may promote or encourage criminal behavior or pose a serious risk to an individual’s life, safety, or well-being. For more information, please review our explainer on Illegal or Regulated Activities.

Hate Speech: Content that demeans, defames, or promotes discrimination or violence towards an individual or group of individuals on the basis of their race, color, caste, ethnicity, national origin, religion, sexual orientation, gender identity, disability, veteran status, immigration status, socio-economic status, age, weight, or pregnancy status. For more information, please review our explainer on Hateful Content, Terrorism, and Violent Extremism.

Child Sexual Exploitation and Abuse: Child Sexual Exploitation and Abuse is defined as content that contains sexual images of a minor and all forms of child sexual exploitation and abuse imagery (CSEAI), as well as grooming or enticement of a minor for any sexual purpose. We report all instances of child sexual exploitation and abuse to authorities. For more information, please review our explainer on Sexual Content.

Terrorism & Violent Extremism: Refers to content that promotes or supports terrorism or other violent, criminal acts committed by individuals and/or groups to further ideological goals, such as those of a political, religious, social, racial, or environmental nature. It includes any content that promotes or supports any foreign terrorist organization or violent extremist hate group, as well as content that advances recruitment for such organizations or violent extremist activities. For more information, please review our explainer on Hateful Content, Terrorism, and Violent Extremism.

Content & Account Reports: The total number of pieces of content and accounts reported to Snap via our in-app reporting menu. Note that content includes photos, videos, and chats.

Enforcement (Enforced): An action taken against a piece of content or an account (e.g., deletion, warning, locking). Note that content includes photos, videos, and chats. Reported content violations may be actioned by human agents or by automation (where high-precision automation is possible).

Total Content Enforced: The total number of pieces of content (e.g., Snaps, Stories) that were enforced against on Snapchat. 

Total Unique Accounts Enforced: The total number of unique accounts that were enforced against on Snapchat. For example, if a single account was enforced against multiple times for various reasons (e.g., a user was warned for posting false information and then later had their account locked for harassing another user), only one account would be counted in this metric as having been enforced. Both enforcement actions would, however, be included in our “Overview of Content and Account Violations” table, with one unique account enforcement for “False Information” and one unique account enforcement for “Harassment and Bullying.”
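
To make the deduplication concrete, here is a minimal sketch in Python, assuming a simplified enforcement log of (account, policy reason) pairs; the data, names, and structure are hypothetical, not our actual reporting pipeline:

```python
from collections import defaultdict

# Hypothetical enforcement log; each entry is (account_id, policy_reason).
enforcements = [
    ("account_a", "False Information"),        # warned
    ("account_a", "Harassment and Bullying"),  # later locked
    ("account_b", "Spam"),
]

# Total Unique Accounts Enforced: each account counts once,
# regardless of how many enforcement actions it received.
total_unique_accounts = len({account for account, _ in enforcements})
print(total_unique_accounts)  # 2

# The per-policy table still records one unique account enforcement
# under each policy reason the account was enforced for.
accounts_by_policy = defaultdict(set)
for account, reason in enforcements:
    accounts_by_policy[reason].add(account)
for reason, accounts in accounts_by_policy.items():
    print(reason, len(accounts))  # each policy reason shows 1 here
```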

% of the Total Reports Enforced by Snap: This value shows the number of pieces of content and accounts enforced within a given policy reason divided by the total number of pieces of content and accounts enforced across all policy reasons.
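
For illustration, a small sketch of the arithmetic with made-up counts (not real data):

```python
# Hypothetical enforcement counts (content pieces + accounts) per
# policy reason; the numbers are invented for illustration.
enforced_by_policy = {
    "Sexual Content": 400,
    "Harassment and Bullying": 350,
    "Spam": 250,
}

total_enforced = sum(enforced_by_policy.values())  # 1000 across all reasons

# Share of total enforcements attributable to each policy reason.
for policy, count in enforced_by_policy.items():
    print(f"{policy}: {count / total_enforced:.1%}")
# Sexual Content: 40.0%
# Harassment and Bullying: 35.0%
# Spam: 25.0%
```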

Turnaround Time: The time from when our Trust & Safety teams first receive a report (usually when the report is submitted) to the timestamp of the last enforcement action. If multiple rounds of review occur, the turnaround time is measured to the last action taken.
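
A minimal sketch of the calculation, assuming simplified, hypothetical timestamps for a single report:

```python
from datetime import datetime

# Hypothetical timestamps; values are illustrative only.
report_received = datetime(2024, 1, 1, 9, 0)  # report submitted and received

enforcement_actions = [
    datetime(2024, 1, 1, 10, 30),  # initial review action
    datetime(2024, 1, 2, 14, 0),   # final action after a second review round
]

# Turnaround time runs from receipt to the timestamp of the LAST
# enforcement action, even when multiple rounds of review occur.
turnaround = max(enforcement_actions) - report_received
print(turnaround)  # 1 day, 5:00:00
```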

Violative View Rate (VVR): VVR is the percentage of all Story and Snap views across Snapchat that contained violating content. For example, if our VVR is 0.03%, that means that out of every 10,000 Snap and Story views on Snapchat, 3 contained content that violated our policies. This metric allows us to understand what proportion of Snap and Story views on Snapchat come from content that violates our Community Guidelines (whether that content was reported or proactively enforced against).
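
The same arithmetic as the example above, sketched in Python with hypothetical counts:

```python
# Hypothetical view counts over a reporting period; the numbers mirror
# the 0.03% example above and are not real data.
total_views = 10_000  # all Snap and Story views across Snapchat
violating_views = 3   # views of content that violated our policies

vvr = violating_views / total_views
print(f"VVR: {vvr:.2%}")  # VVR: 0.03%
```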

Appeal: An appeal occurs when a user submits a request for us to re-review an account-locking enforcement decision. For example, we may remove an account that violated our harassment policy. A user may disagree with our assessment and submit an appeal for us to reconsider our decision. 

Reinstatement: A reinstatement is a reversal of the original moderation decision made in response to an appeal. Upon receiving an appeal, we review and assess whether our initial enforcement action was correct. If we determine that we enforced against the piece of content or account in error under our platform policies, we reinstate the appealed content or account.