Snap Values

Severe Harm

Community Guidelines Explainer Series

Updated: March 2026

The safety of Snapchatters is a top priority. We take behavior that threatens the safety of our community very seriously, particularly when the threat of harm is severe. We consider severe harm to include both (1) harms that risk significant damage to the physical or emotional well-being of Snapchatters, and (2) imminent, credible threats to human life, safety, and well-being. We collaborate with experts, safety groups, and law enforcement on these topics in order to better educate ourselves and our community, and to take appropriate action where these threats may arise on our platform.

Certain categories of content and behavior—such as child sexual exploitation, terrorism-related activity, the promotion of dangerous and illicit drugs or weapons, human trafficking, and other serious criminal conduct—frequently meet this threshold due to the severity and immediacy of the risks involved. We consider these types of harms to merit a heightened level of scrutiny, as well as swift, strict, and permanent consequences for violating accounts.

When we identify Snapchatters engaging in the following activities, we promptly remove the relevant content, disable their accounts, and, in some instances, refer the conduct to law enforcement:

  • Activity that involves sexual exploitation or abuse, including sharing imagery depicting child sexual exploitation for the purpose of sexual gratification, grooming for sexual purposes, child or adult sex trafficking, or sexual extortion (sextortion)

  • Attempting to sell, exchange, or facilitate the sale of dangerous and illicit drugs

  • Credible, imminent threats to human life, safety, or well-being, which may include violent extremism or terrorism-related activities, human trafficking, specific threats of violence (such as a bomb threat), or other serious criminal activities

In addition to enforcing stricter consequences for these violations, we may use proactive detection measures to identify and remove severe violating content more quickly, while respecting applicable privacy restrictions. Our internal teams work continuously with experts to better understand how we can detect and limit threats, prevent harm, and stay informed of potentially harmful trends. Our work on this topic is never finished, and it will continue to evolve with the needs of our community. We invite you to report a safety concern, visit our Safety Center, or learn more about our efforts to address harmful content and promote wellness.