European Union
January 1, 2024 – June 30, 2024

Released:

25 October 2024

Updated:

29 November 2024

Welcome to our European Union (EU) transparency page, where we publish EU-specific information required by the Digital Services Act (DSA), the Audiovisual Media Services Directive (AVMSD), the Dutch Media Act (DMA), and the Terrorist Content Online Regulation (TCO). Please note that the most up-to-date version of these Transparency Reports can be found in the en-US locale.

Legal Representative 

Snap Group Limited has appointed Snap B.V. as its Legal Representative for purposes of the DSA. You can contact the representative at dsa-enquiries [at] snapchat.com for the DSA, at vsp-enquiries [at] snapchat.com for the AVMSD and the DMA, at tco-enquiries [at] snapchat.com for the TCO, through our Support Site [here], or at:

Snap B.V.
Keizersgracht 165, 1016 DP
Amsterdam, The Netherlands

If you are a law enforcement agency, please follow the steps outlined here.

Please communicate in English or Dutch when contacting us.

Regulatory Authorities

For the DSA, we are regulated by the European Commission and the Netherlands Authority for Consumers and Markets (ACM). For the AVMSD and the DMA, we are regulated by the Dutch Media Authority (CvdM). For the TCO, we are regulated by the Netherlands Authority for the Prevention of Online Terrorist Content and Child Sexual Abuse Material (ATKM).

DSA Transparency Report

Last Updated: 25 October 2024

We publish this report regarding our content moderation efforts on Snapchat in accordance with the transparency reporting requirements provided in Articles 15, 24 and 42 of the European Union’s Digital Services Act (Regulation (EU) 2022/2065) (“DSA”). Except where otherwise noted, the information contained in this report is for the reporting period from 1 January 2024 to 30 June 2024 (H1 2024) and covers content moderation on the features of Snapchat that are regulated by the DSA.

We continually strive to improve our reporting. For this reporting period (H1 2024), we have made changes to the structure of our report with new and more differentiated tables to provide improved insight into our content moderation efforts. 

1. Average Monthly Active Recipients 
(DSA Articles 24.2 and 42.3)


As of 1 October 2024, we have 92.9 million average monthly active recipients (“AMAR”) of our Snapchat app in the EU. This means that, on average over the 6-month period ending 30 September 2024, 92.9 million registered users in the EU have opened the Snapchat app at least once during a given month.

This figure breaks down by Member State as follows:

These figures were calculated to meet current DSA requirements and should be relied on only for DSA purposes. We have changed how we calculate this figure over time, including in response to changes in internal policy, regulator guidance and technology, so figures are not intended to be compared between periods. This figure may also differ from the calculations used for other active user figures we publish for other purposes.
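For illustration only, a minimal sketch of the averaging described above; the monthly figures and the use of a simple mean are assumptions for this example and do not reflect Snap's actual data or methodology:

```python
# Hypothetical monthly counts (in millions) of EU registered users who opened
# the Snapchat app at least once in each of the six calendar months ending
# 30 September 2024. These values are illustrative, not Snap's actual data.
monthly_active_recipients = [91.8, 92.3, 92.6, 93.1, 93.4, 94.2]

# AMAR is taken here as the simple mean over the six-month window.
amar = sum(monthly_active_recipients) / len(monthly_active_recipients)
print(f"Average monthly active recipients: {amar:.1f} million")  # 92.9 million
```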


2. Member State Authority Requests
(DSA Article 15.1(a))
a) Orders to act against illegal content


During this reporting period (H1 2024), we received zero (0) orders to act against specifically identified pieces of illegal content from EU Member States’ authorities, including those issued in accordance with DSA Article 9. 

Because this number is zero (0), we cannot provide a breakdown per type of illegal content concerned or Member State issuing the order, or the median times to acknowledge receipt or give effect to the orders.

b) Orders to provide information 


During this reporting period (H1 2024), we received the following orders to disclose user data from EU Member States’ authorities, including those issued in accordance with DSA Article 10:


The median time to inform relevant authorities of the receipt of these orders to provide information was 0 minutes — we provide an automated response confirming receipt. 

The median time to give effect to these orders to provide information was ~7 days. This metric reflects the time period from when Snap received an order to when Snap considered the matter to be fully resolved, which in individual cases may depend in part on the speed with which the relevant Member State authority responds to any requests for clarification from Snap necessary to process the order.
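As a purely illustrative sketch of how a median resolution time can be derived from per-order dates (the orders and dates below are hypothetical and are not drawn from this report):

```python
from datetime import date
from statistics import median

# Hypothetical (received, fully resolved) dates for orders to provide information.
orders = [
    (date(2024, 2, 1), date(2024, 2, 6)),
    (date(2024, 3, 10), date(2024, 3, 18)),
    (date(2024, 5, 4), date(2024, 5, 11)),
]

# Elapsed days from receipt of each order to full resolution.
days_to_resolve = [(resolved - received).days for received, resolved in orders]
print(f"Median time to give effect: {median(days_to_resolve)} days")  # 7 days
```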

Note that we do not provide a breakdown of the above orders to provide information categorized by type of illegal content concerned because this information is not generally available to us.

3. Content Moderation 


All content on Snapchat must adhere to our Community Guidelines and Terms of Service. Certain content must also adhere to additional guidelines and policies. For example, content submitted for algorithmic recommendation to a wider audience on our public broadcast surfaces must meet the additional, higher standards provided in our Content Guidelines for Recommendation Eligibility, while advertisements must comply with our Advertising Policies.

We enforce these policies using technology and human review. We also provide mechanisms for Snapchatters to report violations, including illegal content and activities, directly in-app or through our website. Proactive detection mechanisms and reports prompt a review, which then leverages a mix of automated tools and human moderators to take appropriate action in accordance with our policies. 

We provide further information about our content moderation on our public surfaces in H1 2024 below.

a) Notices submitted in accordance with DSA Article 16 
(DSA Article 15.1(b))

In accordance with DSA Article 16, Snap has put into place mechanisms enabling users and non-users to notify Snap of the presence on Snapchat of specific items of information they consider to be illegal content. They can do so by reporting specific pieces of content or accounts, either directly in the Snapchat app or on our website. 

During the reporting period (H1 2024), we received the following notices submitted in accordance with DSA Article 16 in the EU:


Below, we provide a breakdown reflecting how these notices were processed – i.e., through a process including human review or solely via automated means: 

In submitting notices in-app or through our website, reporters can select a specific reporting reason from a menu of options reflecting the categories of violations listed in our Community Guidelines (e.g., hate speech, drug use or sales). Our Community Guidelines prohibit content and activities that are illegal in the EU, so our reporting reasons largely reflect specific categories of illegal content in the EU. However, to the extent a reporter in the EU believes the content or account they are reporting is illegal for reasons not specifically referenced in our reporting menu, they are able to report it for “other illegal content” and are given an opportunity to explain why they believe what they are reporting is illegal. 

If, upon review, we determine that a reported piece of content or account violates our Community Guidelines (including for reasons of illegality), we may (i) remove the offending content, (ii) warn the relevant account holder and apply a strike against the account, and / or (iii) lock the relevant account, as further explained in our Snapchat Moderation, Enforcement, and Appeals Explainer.

In H1 2024, we took the following enforcement actions upon receipt of notices submitted in accordance with DSA Article 16 in the EU:

In H1 2024, all reports for “other illegal content” that we actioned were ultimately enforced under our Community Guidelines because our Community Guidelines prohibited the relevant content or activity. We thus categorized these enforcements under the relevant category of Community Guideline violation in the table above.

In addition to the above enforcements, we may take action on content notified to us in accordance with other applicable Snap policies and guidelines: 

  • With respect to content on our public broadcast surfaces, if we determine that the reported content does not meet the higher standards of our Content Guidelines for Recommendation Eligibility, we may reject the content for algorithmic recommendation (if the content does not meet our eligibility criteria), or we may limit the distribution of the content to exclude sensitive audiences (if the content meets our eligibility criteria for recommendation but is otherwise sensitive or suggestive).  

In H1 2024, we took the following actions regarding content on Snapchat’s public broadcast surfaces reported to us in the EU, consistent with our Content Guidelines for Recommendation Eligibility:

  • If we determine that a reported advertisement violates our Advertising Policies, we may remove it upon review. 


In H1 2024, we took the following actions regarding advertisements reported to us in the EU:


b) Content moderation engaged in at Snap’s own initiative 
(Article 15.1(c))


In addition to reviewing notices submitted in accordance with DSA Article 16, Snap moderates, at its own initiative, content on the public surfaces of Snapchat (e.g., Spotlight, Discover). Below we provide information about the content moderation engaged in at Snap’s own initiative, including the use of automated tools, the measures taken to provide training and assistance to persons in charge of content moderation, and the number and types of restrictions imposed as a result of those proactive content moderation efforts.


  • Use of automated tools in Snap’s own-initiative moderation


We deploy automated tools to proactively detect and, in some cases, enforce violations of our terms and policies on our public content surfaces. This includes hash-matching tools (including PhotoDNA and Google CSAI Match), Abusive Language Detection models (which detect and reject content based on an identified and regularly updated list of abusive keywords and emojis), and artificial intelligence / machine learning technology. Our automated tools are designed to detect violations of our Community Guidelines (which, among other things, prohibit illegal content) and, where applicable, our Content Guidelines for Recommendation Eligibility and Advertising Policies. 


In H1 2024, all of our proactive detection was conducted using automated tools. When our automated tools detect a potential violation of our policies, they either automatically take action in accordance with our policies, or they create a task for human review. The number and types of restrictions imposed as a result of this process are described below. 
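As an illustrative sketch only of the general detect-then-route pattern described above: the keyword list, confidence values, and thresholds below are hypothetical placeholders, not Snap's actual models, lists, or operating values.

```python
# Hypothetical keyword list and thresholds; real systems use regularly updated
# lists, hash-matching, and ML models that are not reproduced here.
ABUSIVE_TERMS = {"exampleslur1", "exampleslur2"}
AUTO_ACTION_THRESHOLD = 0.95   # assumed confidence above which action is automatic
REVIEW_THRESHOLD = 0.50        # assumed confidence above which a human review task is created

human_review_queue: list[str] = []

def moderate(content_id: str, text: str, model_confidence: float) -> str:
    """Route a potential violation: auto-enforce clear hits, otherwise queue for human review."""
    keyword_hit = any(term in text.lower() for term in ABUSIVE_TERMS)
    if keyword_hit or model_confidence >= AUTO_ACTION_THRESHOLD:
        return "auto-rejected"                  # automatic action under policy
    if model_confidence >= REVIEW_THRESHOLD:
        human_review_queue.append(content_id)   # creates a task for human moderators
        return "queued for human review"
    return "no action"

print(moderate("snap-123", "a harmless caption", model_confidence=0.10))  # no action
```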


  • Number and types of restrictions imposed at Snap’s own initiative


In H1 2024, Snap took the following enforcement actions after proactively detecting, through the use of automated tools, violations of our Community Guidelines (including violations amounting to illegal content and activities under EU and Member State laws):

In addition, in H1 2024, with respect to content on our public broadcast surfaces, we took the following actions after proactively detecting on Snapchat, through the use of automated tools, violations of our Content Guidelines for Recommendation Eligibility:

* As stated in our Content Guidelines for Recommendation Eligibility, accounts that repeatedly or egregiously violate our Content Guidelines for Recommendation Eligibility may be temporarily or permanently disqualified from recommendations on our public broadcast surfaces. We apply this action in the context of our proactive moderation efforts.

Further, in H1 2024, we took the following actions after proactively detecting on Snapchat, through the use of automated tools, violations of our Advertising Policies:

  • Measures taken to provide training and assistance to persons in charge of content moderation


Our content moderation teams apply our content moderation policies to help protect our Snapchat community. They are trained over a multi-week period, in which new team members are educated on Snap’s policies, tools, and escalation procedures. Our moderation teams regularly participate in refresher training relevant to their workflows, particularly when we encounter policy-borderline and context-dependent cases. We also run upskilling programs, certification sessions, and quizzes to ensure all moderators are current and in compliance with all updated policies. Finally, when urgent content trends surface based on current events, we quickly disseminate policy clarifications so teams are able to respond according to Snap’s policies.


We provide our content moderation teams with significant support and resources, including on-the-job wellness support and easy access to mental health services. 

c) Complaints received through Snap’s internal complaint-handling (i.e., appeals) systems 
(Article 15.1(d))


Users whose accounts are locked by our safety teams for Community Guidelines violations (including for illegal content and activities) can submit a locked account appeal. Users can also appeal certain content moderation decisions.

During the reporting period (H1 2024), Snap processed the following appeals (including appeals against both account locks and content-level moderation decisions) submitted via its internal complaint-handling systems in the EU:

d) Use of automated means for the purpose of content moderation 
(Articles 15.1(e) and 42.2(c))

  • Qualitative description and purposes


As explained above in Section 3(b), we deploy automated tools to proactively detect and, in some cases, enforce violations of our terms and policies on our public content surfaces. This includes hash-matching tools (including PhotoDNA and Google CSAI Match), Abusive Language Detection models (which detect and reject content based on an identified and regularly updated list of abusive keywords and emojis), and artificial intelligence / machine learning technology. Our automated tools are designed to detect violations of our Community Guidelines (which, among other things, prohibit illegal content) and, where applicable, our Content Guidelines for Recommendation Eligibility and Advertising Policies.

When our automated tools detect a potential violation of our policies, they either automatically take action in accordance with our policies, or they create a task for human review. 

  • Indicators of accuracy and possible rate of error, broken down by Member State


We monitor the accuracy of our automated moderation tools by selecting random samples of tasks processed by our automated tools on our public surfaces and submitting them for re-review by our human moderation teams. The accuracy rate is the percentage of tasks from these random samples that were upheld by our human moderators upon re-review. The error rate is the difference between 100% and the accuracy rate calculated as described above. 

Based on sampling, in H1 2024, the accuracy of the automated means used across all categories of violations was approximately 93%, and the error rate was approximately 7%.
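A worked illustration of the accuracy and error-rate definitions above, using hypothetical sample counts (the actual sample sizes behind the ~93% figure are not published in this report):

```python
# Hypothetical re-review sample; only the definitions mirror the text above.
sampled_tasks = 10_000        # random sample of automated moderation tasks
upheld_on_rereview = 9_300    # automated decisions confirmed by human moderators

accuracy_rate = upheld_on_rereview / sampled_tasks  # share of upheld decisions
error_rate = 1 - accuracy_rate                      # remainder

print(f"Accuracy: {accuracy_rate:.0%}, error rate: {error_rate:.0%}")  # 93%, 7%
```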

We do not generally track the language of content we moderate on Snapchat, and thus cannot provide a breakdown of the accuracy and error rates for our automated moderation tools for each official language of the Member States. As a proxy for this information, we provide below a breakdown of our accuracy and error rates for automatically moderated content originating from each Member State.

  • Safeguards


We are mindful of the potential impact of automated moderation tools on fundamental rights, and we deploy safeguards to minimize that impact.

Our automated content moderation tools are tested prior to being deployed on Snapchat. Models are tested offline for performance and deployed via A/B testing to ensure their proper functioning prior to being fully phased into production. We perform pre-launch Quality Assurance (QA) reviews, launch reviews, and ongoing precision QA checks during partial (phased) rollouts. 

Following the launch of our automated tools, we evaluate their performance and accuracy on an ongoing basis, and make adjustments as needed. This process involves the re-review of samples of automated tasks by our human moderators to identify models that require adjustments to improve accuracy. We also monitor the prevalence of specific harms on Snapchat via random daily sampling of Public Stories, and leverage this information to identify areas for further improvement. 

Our policies and systems promote consistent and fair enforcement, including by our automated tools, and provide Snapchatters an opportunity to meaningfully dispute enforcement outcomes through notice and appeals processes that aim to safeguard the interests of our community while protecting individual Snapchatter rights.

We strive to continually refine our automated content moderation tools to improve their accuracy and support the consistent and fair enforcement of our policies.

e) Disputes submitted to Article 21 out-of-court dispute settlement bodies
(Article 24.1(a))

For the reporting period (H1 2024), the number of disputes submitted to formally certified out-of-court dispute settlement bodies in accordance with DSA Article 21 was zero (0), and we are unable to provide a breakdown reflecting outcomes, median completion times, or the share of disputes where Snap implemented the decisions of an out-of-court dispute settlement body. 

Note that, in H1 2024, we received two (2) notices of disputes submitted to a body seeking certification as an out-of-court dispute settlement body under DSA Article 21. We did not include these disputes in the count above because the body that transmitted these notices of disputes was unable to verify its certification status upon our request.

f) Suspensions imposed pursuant to Article 23 
(Article 24.1(b))
  • Suspensions pursuant to Article 23.1: suspension of accounts that frequently provide manifestly illegal content 

As explained in our Snapchat Moderation, Enforcement, and Appeals Explainer, accounts we determine are used primarily to violate our Community Guidelines (including through the provision of manifestly illegal content) and accounts that perpetrate serious harms are immediately disabled. For other violations of our Community Guidelines, Snap generally applies a three-part enforcement process:

  • Step one: the violating content is removed.

  • Step two: the Snapchatter receives a notification, indicating that they have violated our Community Guidelines, that their content has been removed, and that repeated violations will result in additional enforcement actions, including their account being disabled.

  • Step three: our team records a strike against the Snapchatter’s account.

Information regarding the number of strikes (i.e., warnings) and locks imposed in H1 2024 on accounts in the EU in relation to content or activities on Snapchat’s public surfaces can be found above, in Sections 3(a) and 3(b).

  • Suspensions pursuant to Article 23.2: suspension on the processing of notices and complaints from individuals, entities and complainants that frequently submit notices or complaints that are manifestly unfounded

Applying our internal definition of “manifestly unfounded” notices and complaints, and our internal thresholds for what we consider to be the frequent submission of such notices and complaints, the number of suspensions on the processing of notices and complaints imposed in H1 2024 pursuant to DSA Article 23.2 is as follows:

4. Information About Our Content Moderation Teams 
a) Human resources dedicated to content moderation, including for compliance with DSA Articles 16, 20 and 22, broken down by official languages of the Member States
(Article 42.2(a))


Our content moderation teams operate across the globe, enabling us to help keep Snapchatters safe 24/7. Below, you will find the breakdown of our human moderation resources by the language specialties of moderators (note that some moderators specialize in multiple languages) as of 30 June 2024:

The above table includes all human resources dedicated to content moderation who support official languages of the EU Member States as of 30 June 2024. In situations where we need additional language support, we use translation services.

b) Qualifications and linguistic expertise of content moderators; training and support provided
(Article 42.2(b))


Moderators are recruited using a standard job description that includes a language requirement (depending on the need). The language requirement states that the candidate should be able to demonstrate written and spoken fluency in the language and have at least one year of work experience for entry-level positions. Candidates must meet the educational and background requirements to be considered. Candidates must also demonstrate an understanding of current events for the country or region of content moderation they will support.

See above for information on the training and support Snap provides to content moderators, which is separately required under DSA Article 15(1)(c) and thus included in Section 3(b), in the final subsection titled “Measures taken to provide training and assistance to persons in charge of content moderation.”

Child Sexual Exploitation and Abuse (CSEA) Media Scanning Report


Background

The sexual exploitation of any member of our community, especially minors, is illegal, abhorrent, and prohibited by our Community Guidelines. Preventing, detecting, and eradicating Child Sexual Exploitation and Abuse (CSEA) on our platform is a top priority for Snap, and we continually evolve our capabilities to combat these and other crimes.


We use PhotoDNA robust hash-matching and Google’s Child Sexual Abuse Imagery (CSAI) Match to identify known illegal images and videos of child sexual abuse, respectively, and report them to the U.S. National Center for Missing and Exploited Children (NCMEC), as required by law. NCMEC, in turn, coordinates with domestic or international law enforcement, as required.
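For illustration of the general hash-matching pattern only: PhotoDNA and CSAI Match are proprietary systems based on robust/perceptual hashing that tolerates re-encoding and resizing, and their APIs are not reproduced here; the sketch below uses SHA-256 and a hypothetical hash list purely to show the membership-check idea.

```python
import hashlib

# Hypothetical set of hashes of known violating media (placeholder value only).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def media_hash(data: bytes) -> str:
    # Stand-in for a robust/perceptual hash; SHA-256 only matches exact bytes.
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes) -> bool:
    """Return True when uploaded media matches a known hash and should be escalated."""
    return media_hash(data) in KNOWN_HASHES

# Confirmed matches are then reported to NCMEC, as described above.
```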


Report

The data below is based on the results of proactive scanning, using PhotoDNA and/or CSAI Match, of media uploaded from a user’s camera roll to Snapchat.

Stopping child sexual exploitation is a top priority. Snap devotes significant resources to this effort and has zero tolerance for such conduct. Special training is required to review CSE appeals, and a limited team of agents handles these reviews due to the graphic nature of the content. During the fall of 2023, Snap implemented policy changes that affected the consistency of certain CSE enforcements; we have addressed these inconsistencies through agent re-training and rigorous quality assurance. We expect that the next transparency report will show progress toward improving response times for CSE appeals and improving the precision of initial enforcements.

Content Moderation Safeguards

The safeguards applied for CSEA Media Scanning are set out in the “Safeguards” subsection of Section 3(d) of our DSA Transparency Report above.


European Union Terrorist Content Online Transparency Report

Published: June 17, 2024

Last Updated: June 17, 2024

This Transparency Report is published in accordance with Articles 7(2) and 7(3) of Regulation (EU) 2021/784 of the European Parliament and of the Council, addressing the dissemination of terrorist content online (the Regulation). It covers the reporting period of January 1 to December 31, 2023.


General Information
  • Article 7(3)(a): information about the hosting service provider’s measures in relation to the identification and removal of or disabling of access to terrorist content

  • Article 7(3)(b): information about the hosting service provider’s measures to address the reappearance online of material which has previously been removed or to which access has been disabled because it was considered to be terrorist content, in particular where automated tools have been used


Terrorists, terrorist organizations, and violent extremists are prohibited from using Snapchat. Content that advocates, promotes, glorifies, or advances terrorism or other violent, criminal acts is prohibited under our Community Guidelines. Users are able to report content that violates our Community Guidelines via our in-app reporting menu and our Support Site. We also use proactive detection to attempt to identify violative content on public surfaces like Spotlight and Discover.


Regardless of how we become aware of violating content, our Trust & Safety teams, through a combination of automation and human moderation, promptly review identified content and make enforcement decisions. Enforcements may include removing the content, warning or locking the violating account, and, if warranted, reporting the account to law enforcement. To prevent the reappearance of terrorist or other violent extremist content on Snapchat, in addition to working with law enforcement, we take steps to block the device associated with the violating account and prevent the user from creating another Snapchat account.


Additional details regarding our measures for identifying and removing terrorist content can be found in our Explainer on Hateful Content, Terrorism, and Violent Extremism and our Explainer on Moderation, Enforcement, and Appeals.



Reports & Enforcements 
  • Article 7(3)(c): the number of items of terrorist content removed or to which access has been disabled following removal orders or specific measures, and the number of removal orders where the content has not been removed or access to which has not been disabled pursuant to the first subparagraph of Article 3(7) and the first subparagraph of Article 3(8), together with the grounds therefor


During the reporting period, Snap did not receive any removal orders, nor were we required to implement any specific measures pursuant to Article 5 of the Regulation. Accordingly, we were not required to take enforcement action under the Regulation.


The following table describes enforcement actions taken, based on user reports and proactive detection, against content and accounts, both in the EU and elsewhere around the world, that violated our Community Guidelines relating to terrorist and violent extremist content.

Enforcement Appeals
  • Article 7(3)(d): the number and the outcome of complaints handled by the hosting service provider in accordance with Article 10

  • Article 7(3)(g): the number of cases in which the hosting service provider reinstated content or access thereto following a complaint by the content provider


Because we had no enforcement actions required under the Regulation during the reporting period as noted above, we handled no complaints pursuant to Article 10 of the Regulation and had no associated reinstatements.


The following table contains information relating to appeals and reinstatements, both in the EU and elsewhere around the world, involving terrorist and violent extremist content enforced under our Community Guidelines.

Judicial Proceedings & Appeals
  • Article 7(3)(e): the number and the outcome of administrative or judicial review proceedings brought by the hosting service provider

  • Article 7(3)(f): the number of cases in which the hosting service provider was required to reinstate content or access thereto as a result of administrative or judicial review proceedings


As we had no enforcement actions required under the Regulation during the reporting period, as noted above, we had no associated administrative or judicial review proceedings, and we were not required to reinstate content as a result of any such proceedings.

DSA Risk Assessment

This report has been prepared to comply with Snap’s obligations under Articles 34 and 35 of Regulation (EU) 2022/2065 and provides the results of our assessment of the systemic risks stemming from the design, functioning and use of Snapchat’s online platforms, together with the methodology used to assess those risks and the mitigation measures put in place to address them.


DSA Risk and Mitigation Assessment Report | Snapchat | August 2023 (PDF)


DSA Audit and Audit Implementation

These reports have been prepared to comply with Snap’s obligations under Article 37 of Regulation (EU) 2022/2065 and provide: (i) the results of the independent audit of Snap’s compliance with the obligations set out in Chapter III of Regulation (EU) 2022/2065 and (ii) the measures taken to implement the operational recommendations from that independent audit.

DSA Independent Audit Report | Snapchat | August 2024 (PDF)

DSA Audit Implementation Report | Snapchat | September 2024 (PDF)