December 5, 2024
We release transparency reports twice a year to provide insight into Snap's safety efforts. We are committed to this work, and we continually strive to make these reports more comprehensive and informative for the many stakeholders who care deeply about our content moderation and law enforcement practices, and about the safety and well-being of the Snapchat community.
This Transparency Report covers the first half of 2024 (January 1 - June 30). As with our previous reports, we share the global volume of in-app content and account reports that our Trust & Safety teams received and enforced against across specific categories of Community Guidelines violations; how we responded to requests from law enforcement and governments; and how we responded to notices of copyright and trademark infringement. We also provide country-specific insights in the files linked at the bottom of this page.
As part of our ongoing commitment to improving our transparency reports, we are also introducing new data highlighting our efforts to proactively detect and combat a broader range of Community Guidelines violations. We have included this data at both the global and country level in this report, and will continue to do so going forward. We have also corrected a mislabeling in our previous reports: where we previously referred to "Total Content Enforced," we now refer to "Total Enforcements," to reflect that the data in the relevant columns includes enforcement actions against both content and accounts.
For more information about our policies for combating potential online harms, and our plans to continue improving our reporting, please see our recent Safety & Impact blog about this Transparency Report. To find additional safety and privacy resources on Snapchat, see our Transparency Reporting tab at the bottom of the page.
Please note that the most up-to-date version of this Transparency Report is the English (US) version.
Overview of Actions Our Trust & Safety Teams Take to Enforce Our Community Guidelines
Our Trust & Safety teams enforce our Community Guidelines both proactively (through the use of automated tools) and reactively (in response to reports), as detailed in the following sections of this report. In this reporting period (H1 2024), our Trust & Safety teams took the following enforcement actions:
Below is a breakdown by type of Community Guidelines violation, including the median turnaround time between when we detected the violation (either proactively or upon receipt of a report) and when we took final enforcement action on the relevant content or account:
During the reporting period, we saw a Violative View Rate (VVR) of 0.01 percent, which means that out of every 10,000 Snap and Story views on Snapchat, 1 contained content found to violate our Community Guidelines.
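The VVR figure follows directly from its definition as the share of views that contained violating content. A minimal sketch of the arithmetic (the view counts below are hypothetical, chosen only to illustrate the 0.01% rate; they are not Snap's actual data):

```python
# Violative View Rate (VVR): percentage of Snap and Story views that
# contained content found to violate the Community Guidelines.
# Both counts below are hypothetical illustration values.
violative_views = 1_000        # views of content later found violating
total_views = 10_000_000       # all Snap and Story views in the period

vvr_percent = violative_views / total_views * 100
views_per_violation = 10_000 * vvr_percent / 100  # per 10,000 views

print(f"VVR: {vvr_percent:.2f}%")                     # VVR: 0.01%
print(f"{views_per_violation:.0f} in every 10,000 views")  # 1 in every 10,000 views
```

A 0.01% VVR is equivalent to 1 violating view per 10,000, which is how the report phrases it.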
Community Guidelines Violations Reported to Our Trust & Safety Teams
From January 1 - June 30, 2024, in response to in-app reports of violations of our Community Guidelines, Snap’s Trust & Safety teams took a total of 6,223,618 enforcement actions globally, including enforcements against 3,842,507 unique accounts. The median turnaround time for our Trust & Safety teams to take enforcement action in response to those reports was ~24 minutes. A breakdown per reporting category is provided below.
Analysis
Our overall reporting volumes remained fairly stable in H1 2024, as compared to the previous six months. This cycle, we saw an increase in total enforcements and total unique accounts enforced by approximately 16%.
Over the last 12 months, Snap introduced new reporting mechanisms for users, which account for changes to our reported and enforced volumes and for increases in turnaround times in this reporting period (H1 2024). Specifically:
Group Chat Reporting: We introduced Group Chat Reporting on October 13, 2023, which enables users to report abuse occurring in a multi-person chat. This change impacted the makeup of our metrics across reporting categories (because some potential harms are more likely to occur in a chat context) and increased report actionability.
Account Reporting Enhancements: We also evolved our Account Reporting feature to give reporting users an option to submit chat evidence when reporting an account suspected of being operated by a bad actor. This change, which provides us with greater evidence and context to assess account reports, launched on February 29, 2024.
Chat Reports, and especially Group Chat Reports, are among the most complex and time-consuming to review, which drove up turnaround times across the board.
Reports of suspected Child Sexual Exploitation & Abuse (CSEA), Harassment & Bullying, and Hate Speech were particularly impacted by the two changes described above, and by shifts in the broader ecosystem. Specifically:
CSEA: We observed an increase in CSEA-related reports and enforcements in H1 2024. Specifically, we saw a 64% increase in total in-app reports by users, an 82% increase in total enforcements, and a 108% increase in total unique accounts enforced. These increases are largely driven by the introduction of Group Chat and Account Reporting functionalities. Given the sensitive nature of this moderation queue, a select team of highly trained agents is assigned to review reports of potential CSEA-related violations. The influx of additional reports combined with our teams adapting to new trainings has resulted in an increase in turnaround times. Moving forward, we have increased the size of our global vendor teams significantly to reduce turnaround times and accurately enforce on reports of potential CSEA. We expect that our H2 2024 Transparency Report will reflect the fruits of this effort with a materially improved turnaround time.
Harassment & Bullying: Based on reports, we have observed that Harassment & Bullying disproportionately occurs in chats, and particularly group chats. The improvements we introduced to Group Chat Reporting and Account Reporting help us take more comprehensive action when assessing reports in this reporting category. Additionally, as of this period, we require users to input a comment when submitting a harassment and bullying report. We review this comment to contextualize each report. Together, these changes led to material increases in total enforcements (+91%), total unique accounts enforced (+82%), and turnaround time (+245 mins) for corresponding reports.
Hate Speech: In H1 2024, we observed increases in reported content, total enforcements, and turnaround time for Hate Speech. Specifically, we saw a 61% increase in in-app reports, a 127% increase in total enforcements, and a 125% increase in total unique accounts enforced. This was due, in part, to improvements in our chat reporting mechanisms (as previously discussed), and was further exacerbated by the geopolitical environment, particularly the continuation of the Israel-Hamas conflict.
This reporting period, we saw a ~65% decrease in total enforcements and a ~60% decrease in total unique accounts enforced in response to reports of suspected Spam & Abuse, reflecting improvements in our proactive detection and enforcement tools. We saw similar declines in total enforcements in response to reports of content relating to Self Harm and Suicide (~80% decreases), reflecting our updated victim-centric approach, according to which our Trust & Safety teams will, in appropriate cases, send relevant users self-help resources, rather than take an enforcement action against those users. This approach was informed by members of our Safety Advisory Board, including a pediatric professor and medical doctor who specializes in interactive media and internet disorders.
Our Efforts to Proactively Detect and Enforce Against Violations of Our Community Guidelines
We deploy automated tools to proactively detect and, in some cases, enforce against violations of our Community Guidelines. These tools include hash-matching tools (including PhotoDNA and Google Child Sexual Abuse Imagery (CSAI) Match), abusive language detection tools (which detect and enforce based on an identified and regularly updated list of abusive keywords and emojis), and multi-modal artificial intelligence / machine learning technology.
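Hash-matching works by comparing a digital fingerprint of uploaded media against a list of fingerprints of known violating content. Tools like PhotoDNA use perceptual hashes that are robust to resizing and re-encoding; the sketch below uses an exact cryptographic hash (SHA-256) as a simplified stand-in, illustrating only the matching flow, not any actual Snap system:

```python
import hashlib

# Simplified illustration of hash-matching moderation.
# Real tools (e.g., PhotoDNA) use perceptual hashes that tolerate minor
# edits; SHA-256 here is an exact-match stand-in for the concept only.

# Fingerprints of known violating media (hypothetical example entry).
known_hashes = {
    hashlib.sha256(b"known-violating-media").hexdigest(),
}

def is_known_violation(media_bytes: bytes) -> bool:
    """Return True if the media's fingerprint matches a known entry."""
    return hashlib.sha256(media_bytes).hexdigest() in known_hashes

print(is_known_violation(b"known-violating-media"))  # True
print(is_known_violation(b"benign-media"))           # False
```

Because matching is against a curated list of fingerprints, this approach can only flag previously identified media; novel content is handled by the other detection methods described above.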
In H1 2024, we took the following enforcement actions after proactively detecting, through the use of automated tools, violations of our Community Guidelines:
Sexual exploitation of any member of our community, especially minors, is illegal, abhorrent, and prohibited by our Community Guidelines. Preventing, detecting, and eradicating Child Sexual Exploitation and Abuse (CSEA) on our platform is a top priority for Snap, and we continually evolve our capabilities to combat these and other crimes.
We use proactive technology detection tools, such as PhotoDNA robust hash-matching and Google's Child Sexual Abuse Imagery (CSAI) Match, to identify known illegal images and videos of CSEA, respectively. In addition, in some cases, we use behavioral signals to enforce against other potentially illegal CSEA activity. We report CSEA-related content to the U.S. National Center for Missing and Exploited Children (NCMEC), as required by law. NCMEC then, in turn, coordinates with domestic or international law enforcement, as required.
In the first half of 2024, we took the following actions upon detecting CSEA on Snapchat (either proactively or upon receiving a report):
*Note that each submission to NCMEC can contain multiple pieces of content. The total individual pieces of media submitted to NCMEC is equal to our total content enforced.
Our Efforts to Provide Resources and Support to Snapchatters in Need
We care deeply about the mental health and well-being of Snapchatters, which continues to inform our decisions to build Snapchat differently. As a platform designed for communications between and among real friends, we believe Snapchat can play a unique role in empowering friends to help each other in difficult times. This is why we have developed resources and support for Snapchatters in need.
Our Here For You search tool shows resources from expert local partners when users search for certain topics related to mental health, anxiety, depression, stress, suicidal thoughts, grief and bullying. We have also developed a page dedicated to financial sextortion and other sexual risks and harms, in an effort to support those in distress. Our global list of safety resources is publicly available to all Snapchatters, in our Privacy, Safety & Policy Hub.
When our Trust & Safety teams become aware of a Snapchatter in distress, they can forward self-harm prevention and support resources, and notify emergency response personnel when appropriate. The resources that we share are available on our global list of safety resources, and are publicly available to all Snapchatters.
Appeals
Below we provide information about appeals we received from users requesting a review of our decision to lock their account:
* As discussed in the “Analysis” section above, stopping the spread of content or activity related to child sexual exploitation is a top priority. Snap devotes significant resources toward this goal and has zero tolerance for such conduct. We have expanded our global vendor teams to adapt to new policies and reporting features for Snapchat. In doing so, between H2 2023 and H1 2024, we reduced the turnaround time for CSEA appeals from 152 days to 15 days. We continually strive to improve our processes, including in relation to appeal turnaround times.
Regional & Country Overview
This section provides an overview of the actions our Trust & Safety teams took to enforce our Community Guidelines across regions, both proactively and in response to in-app reports of violations. Our Community Guidelines apply to all content on Snapchat and to all Snapchatters, everywhere in the world, regardless of location.
Information for individual countries and regions, including all EU member states, is available for download via the attached CSV file.
Ads Moderation
Snap is committed to ensuring that all ads are fully compliant with our advertising policies. We believe in a responsible and respectful approach to advertising, creating a safe and enjoyable experience for all of our users. All ads are subject to our review and approval. In addition, we reserve the right to remove ads, including in response to user feedback, which we take seriously.
Below we have included insight into our moderation for paid advertisements that are reported to us following their publication on Snapchat. Note that ads on Snapchat can be removed for a variety of reasons as outlined in Snap’s Advertising Policies, including deceptive content, adult content, violent or disturbing content, hate speech, and intellectual property infringement. Additionally, you can now find Snapchat’s Ads Gallery in the navigation bar of this transparency report.