
r/redditsecurity

News from reddit about site security

Posted 11 days ago · 129 points · 83 comments

Hi reddit peoples!

You may remember me from a few weeks ago when I gave an update on user blocking. Thank you to everyone who gave feedback about what is and isn’t working about blocking. The stories and examples many of you shared helped identify a few ways blocking should be improved. Today, based on your feedback, we’re happy to share three new updates to blocking. Let’s get to it…

Update #1: Preventing people from using blocking to shut down conversations

In January, we changed the tool so that when you block someone, they can’t see or respond to any of your comment threads. We designed blocking to prevent harassment, but we see that we have also opened up a way for users to shut down conversations.

Today we’re shipping a change so that users aren’t locked out of an entire comment thread when a user blocks them, and can reply to some embedded replies (i.e., the replies to your replies). We want to find the right balance between protecting redditors from being harassed while keeping conversations open. We’ll be testing a range of values, from the 2nd to 15th-level reply, for how far a thread continues before a blocked user can participate. We’ll be monitoring how this change affects conversations as we determine how far to turn this ‘knob’ and exploring other possible approaches. Thank you for helping us get this right.
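
For readers curious about the mechanics, here is a rough sketch of what such a depth "knob" could look like. This is an illustrative assumption, not Reddit's implementation: the function name and the threshold value of 3 are made up, and only the 2nd-to-15th-level range comes from the post above.

```python
# Illustrative sketch of the reply-depth "knob" described above.
# REPLY_DEPTH_THRESHOLD is a placeholder value, to be tuned somewhere
# between the 2nd and 15th reply level mentioned in the post.
REPLY_DEPTH_THRESHOLD = 3

def blocked_user_can_reply(depth_below_blockers_comment: int) -> bool:
    """A user blocked by a thread's author may rejoin the conversation once
    the nested replies reach at least REPLY_DEPTH_THRESHOLD levels deep."""
    return depth_below_blockers_comment >= REPLY_DEPTH_THRESHOLD

print(blocked_user_can_reply(1))  # False: direct replies stay off-limits
print(blocked_user_can_reply(5))  # True: deeply nested replies open back up
```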

Update #2: Fixing bugs

We have fixed two notable bugs:

  1. When you block someone in the same thread as you, your comments are now always visible in your profile.

  2. Blocking on old Reddit now works the same way as it does on the rest of the platform. We fixed an issue on old Reddit that sometimes caused the block experience to revert to the old version and at other times produced a mix of the new and old experiences.

If you see any bugs, please keep reporting them! Your feedback helps keep reddit a great place for everyone to share, discuss, and debate. (What kind of world would we live in if we couldn’t debate the worst concert to go to if band names were literal?)

Update #3: People want more controls over their experience

Posted 2 months ago · 165 points · 275 comments

Hello people of Reddit,

Earlier this year we made some updates to our blocking feature. The purpose of these changes is to better protect users who experience harassment. We believe in the good — that the overwhelming majority of users are not trying to be jerks. Blocking is a tool for when someone needs extra protection.

The old version of blocking did not allow users to see posts or comments from blocked users, which often left the user unaware that they were being harassed. This was a big gap, and we saw users frequently cite this as a problem in r/help and similar communities. Our recent updates were aimed at solving this problem and giving users a better way to protect themselves. ICYMI, my posts in December and January cover in more detail the before and after experiences. You can also find more information about blocking in our Help Centers here and here.

We know that the rollout of these changes could have been smoother. We tried our best to provide a seamless transition by communicating early and often with mods via Mod Council posts and calls. When it came time to launch the experience, we ran into scalability issues that hindered our ability to roll out the update to the entire site, meaning that the rollout was not consistent across all users.

This issue meant that some users temporarily experienced inconsistency with:

  • Viewing profiles of blocked users between Web and Mobile platforms

  • Replying to users who have blocked you

  • Viewing users who have blocked you in community and home feeds

As we worked to resolve these issues, new bugs would pop up that took us time to find, recreate, and resolve. We understand how frustrating this was for you, and we made the blocking feature our top priority during this time. We had multiple teams contribute to making it more scalable, and bug reports were investigated thoroughly as soon as they came in.

Since mid-June, the feature has been fully functional on all platforms. We want to acknowledge and apologize for the bugs that made this update more difficult to manage and use. We understand that this created an inconsistent and confusing experience, and we have held multiple reviews to learn from our mistakes on how to scale these types of features better next time.

While we were making the feature more durable, we noticed multiple community concerns about blocking abuse. We heard this concern before we launched, and added additional protections to limit suspicious blocking behavior, as well as monitoring metrics that would alert us if the suspicious behavior was happening at scale. That said, it concerned us that there were continued references to this abuse, so we completed an investigation into the severity and scale of block abuse.

Posted 3 months ago · 141 points · 37 comments

Hey-o and a big hello from SF where some of our resident security nerds just got back from attending the annual cybersecurity event known as RSA. Given the congregation of so many like-minded, cyber-focused folks, we’ve been thinking a lot about the role of Reddit not just in providing community and belonging to everyone in the world, but also about how Reddit interacts with the broader internet ecosystem.

Ain’t no party like a breached third party

In last quarter’s report we talked about the metric “Third Party Breach Accounts Processed”, because it was jumping around a bit, but this quarter we wanted to dig in again and clarify what that number represents.

First off, when we’re talking about third-party breaches, we’re talking about other websites or apps (i.e., not Reddit) that have had a breach where data was leaked or stolen. When the leaked/stolen data includes usernames and passwords (or email addresses that include your username, like worstnerd@reddit.com), bad actors will often try to log in using those credentials at all kinds of sites across the internet, including Reddit -- not just on the site/app that got hacked. Why would an attacker bother to try a username and password on a random site? The answer is that since many people reuse their passwords from one site to the next, with a big file of passwords and enough websites, an attacker might just get lucky. And since most login “usernames” these days are email addresses, it’s even easier to find when a person is reusing their password.

Each username and password pair in this leaked/stolen data is what we describe as a “third-party breach account”. The number of “third-party breach accounts” can get pretty large because a single username/email address could show up in breaches at multiple websites, and we process every single one of those instances. “Processing” the breach account means we (1) check if the breached username is associated with a Reddit account and (2) whether that breached password, when hashed, matches the Reddit account’s current hashed password. (TL;DR: a “hashed” password means the password has been permanently turned into a scrambled version of itself, so nobody ever sees or has access to your password.) If the answer to both questions is yes, we let that Reddit user know it’s time to change their password! And we recommend they add some 2FA on top to double-plus protect that account from attackers.
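
To make those two checks concrete, here is a minimal sketch of processing a single breach account. This is an assumption-laden illustration: it uses a bcrypt-style hash comparison and a toy in-memory lookup table; none of these names or schemes correspond to Reddit's actual systems.

```python
# Minimal sketch of "processing" one third-party breach account, under the
# assumptions stated above (bcrypt-style hashing, hypothetical account store).
import bcrypt

# Hypothetical store mapping a normalized username -> current password hash.
account_hashes: dict[str, bytes] = {
    "worstnerd": bcrypt.hashpw(b"hunter2", bcrypt.gensalt()),
}

def process_breach_account(breached_username: str, breached_password: str) -> bool:
    """Return True if this breached credential pair matches a live account."""
    # (1) Is the breached username associated with an account here?
    current_hash = account_hashes.get(breached_username.lower())
    if current_hash is None:
        return False
    # (2) Does the breached password, once hashed, match the current password hash?
    return bcrypt.checkpw(breached_password.encode(), current_hash)

# If both checks pass, the user would be told to reset their password and add 2FA.
if process_breach_account("worstnerd", "hunter2"):
    print("Credential reuse detected: prompt a password reset and recommend 2FA.")
```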

There are a LOT of these stolen credential files floating around the internet. For a while security teams and specialized firms used to hunt around the dark web looking for files and pieces of files to do courtesy checks and keep people safe. Now, anyone is able to run checks on whether they’ve had their information leaked by using resources like Have I Been Pwned (HIBP). It’s pretty cool to see this type of ecosystem innovation, as well as how it’s been adopted into consumer tech like password managers and browsers.

Wrapping it up on this particular metric, last quarter we were agog to see “3rd party breach accounts processed” jump up to ~1.4B breach accounts, and this quarter we are relieved to see that has come back down to a (still whopping) ~314M breach accounts. This means that in Q1 2022 we received 314M username/password combos from breaches at other websites. Some subset of those accounts might be associated with people who use Reddit, and then a smaller subset of those accounts may have reused their breached passwords here. Specifically, we took protective action on 878,730 Reddit accounts this quarter, which means that many of you got a message from us to please change your passwords.

How we think about emerging threats (on and off of Reddit)

Just like we take a look at what’s going on in the dark web and across the ecosystem to identify vulnerable Reddit accounts, we also look across the internet to spot other trends or activities that shed light on potential threats to the safety or security of our platform. We don’t just want to react to what shows up on our doorstep; we get proactive when we can by trying to predict how events happening elsewhere might affect Reddit. Examples include analyzing the internet ecosystem at large to understand trends and problems elsewhere, as well as analyzing our own Reddit telemetry for clues that might help us understand how and where those activities could show up on our platform. And while y’all know from previous quarterly reports we LOVE digging into our data to help shed light on trends we’re seeing, sometimes our work includes really simple things like keeping an eye on the news. Because as things happen in the “real world” they also unfold in interesting ways on the internet and on Reddit. Sometimes it seems like our ecosystem is the web, but we often find that our ecosystem is the world.

Our quarterly reports talk about both safety AND security issues (it’s in the title of the report, lol), but it’s pretty fluid sometimes as to which issues or threats are “safety” related, and which are “security” related. We don’t get too spun up about the overlap as we’re all just focused on how to protect the platform, our communities, and all the people who are participating in the conversations here on Reddit. So when we’re looking across the ecosystem for threats, we’re expansive in our thinking -- keeping eyes open looking for spammers and scammers, vulns and malware, groups organizing influence campaigns and also groups organizing denial of service attacks. And once we understand what kind of threats are coming our way, we take action to protect and defend Reddit.

Posted 6 months ago · 539 points · 272 comments

For several years now, we have been steadily scaling up our safety enforcement mechanisms. In the early phases, this involved addressing reports across the platform more quickly as well as investments in our Safety teams, tooling, machine learning, etc. – the “rising tide raises all boats” approach to platform safety. This approach has helped us increase the amount of content reviewed by around 4x and the number of accounts actioned by more than 3x since the beginning of 2020. However, in addition to this, we know that abuse is not just a problem of “averages.” There are particular communities that face an outsized burden of dealing with abusive users, and some members, due to their activity on the platform, face unique challenges that are not reflected in “the average” user experience. This is why, over the last couple of years, we have been focused on doing more to understand and address the particular challenges faced by certain groups of users on the platform. This started with our first Prevalence of Hate study, and then later our Prevalence of Holocaust Denialism study. We would like to share the results of our recent work to understand the prevalence of hate directed at women.

The key goals of this work were to:

  1. Understand the frequency at which hateful content is directed at users perceived as being women (including trans women)

  2. Understand how other Redditors respond to this content

  3. Understand how Redditors respond differently to users perceived as being women (including trans women)

  4. Understand how Reddit admins respond to this content

First, we need to define what we mean by “hateful content directed at women” in this context. For the purposes of this study, we focused on content that included commonly used misogynistic slurs (I’ll leave this to the reader’s imagination and will avoid providing a list), as well as content that is reported or actioned as hateful along with some indicator that it was directed at women (such as the usage of “she,” “her,” etc in the content). As I’ve mentioned in the past, humans are weirdly creative about how they are mean to each other. While our list was likely not exhaustive, and may have surfaced potentially non-abusive content as well (e.g., movie quotes, reclaimed language, repeating other users, etc), we do think it provides a representative sample of this kind of content across the platform.

We specifically wanted to look at how this hateful content is impacting women-oriented communities, and users perceived as being women. We used a manually curated list of over 300 subreddits that were women-focused (trans-inclusive). In some cases, Redditors self-identify their gender (“...as a woman I am…”), but one of the most consistent ways to learn something about a user is to look at the subreddits in which they participate.

For the purposes of this work, we will define a user perceived as being a woman as an account that is a member of at least two women-oriented subreddits and has overall positive karma in women-oriented subreddits. This makes no claim of the account holder’s actual gender, but rather attempts to replicate how a bad actor may assume a user’s gender.
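
As a rough illustration, the heuristic above could be expressed like this. The code is an assumption-based sketch, not the study's actual tooling: it uses per-subreddit karma as a stand-in for membership, and the subreddit names and karma values in the example are made up.

```python
# Illustrative sketch of the "perceived as a woman" heuristic defined above:
# member of at least two women-oriented subreddits, with net-positive karma there.
def perceived_as_woman(karma_by_subreddit: dict[str, int],
                       women_oriented_subs: set[str]) -> bool:
    joined = set(karma_by_subreddit) & women_oriented_subs
    karma_in_women_subs = sum(karma_by_subreddit[sub] for sub in joined)
    return len(joined) >= 2 and karma_in_women_subs > 0

# Example with hypothetical data (the real curated list has 300+ subreddits):
women_subs = {"TwoXChromosomes", "AskWomen", "xxfitness"}
user_karma = {"AskWomen": 120, "xxfitness": 45, "programming": 300}
print(perceived_as_woman(user_karma, women_subs))  # True
```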

With those definitions, we find that in both women-oriented and non-women-oriented communities, approximately 0.3% of content is identified as being hateful content directed at women. However, while the rate of hateful content is approximately the same, the response is not! In women-oriented communities, this hateful content is nearly TWICE as likely to be negatively received (reported, downvoted, etc.) as in non-women-oriented communities (see chart). This tells us that in women-oriented communities, users and mods are much more likely to downvote and challenge this kind of hateful content.

Table: Community response (hateful content vs non-hateful content). Columns: Women-oriented communities, Non-women-oriented communities, Ratio.

Posted 6 months ago (archived) · 192 points · 144 comments

Hi Community!

We’d like to announce an update to the way that we’ll be tagging NSFW posts going forward. Beginning next week, we will be automatically detecting and tagging Reddit posts that contain sexually explicit imagery as NSFW.

To do this, we’ll be using automated tools to detect and tag sexually explicit images. When a user uploads media to Reddit, these tools will automatically analyze the media; if the tools detect that there’s a high likelihood the media is sexually explicit, it will be tagged accordingly when posted. We’ve gone through several rounds of testing and analysis to ensure that our tagging is accurate with two primary goals in mind: 1. protecting users from unintentional experiences; 2. minimizing the incidence of incorrect tagging.
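
In essence, this kind of tagging flow amounts to a probability threshold on a classifier's output. The sketch below is an assumption, not Reddit's actual tools: the classifier stub, the data model, and the 0.9 threshold are all placeholders used only to illustrate the idea.

```python
# Minimal sketch of threshold-based NSFW tagging, under the assumptions above.
from dataclasses import dataclass

NSFW_THRESHOLD = 0.9  # illustrative: lower catches more, but mis-tags more SFW posts

@dataclass
class Post:
    media: bytes
    nsfw: bool = False

def classify_explicit(media: bytes) -> float:
    """Stand-in for an image classifier returning P(sexually explicit)."""
    return 0.0  # a real deployment would run a trained model here

def auto_tag(post: Post) -> None:
    # Analyze the media at upload time; tag the post NSFW if the likelihood is high.
    if classify_explicit(post.media) >= NSFW_THRESHOLD:
        post.nsfw = True  # mods and posters can still correct mistakes via existing tools
```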

Historically, our tagging of NSFW posts was driven by our community moderators. While this system has largely been effective and we have a lot of trust in our Redditors, mistakes can happen, and we have seen NSFW posts mislabeled and uploaded to SFW communities. Under the old system, when mistakes occurred, mods would have to manually tag posts and escalate requests to admins after the content was reported. Our goal with today’s announcement is to relieve mods and admins of this burden, and ensure that NSFW content is detected and tagged as quickly as possible to avoid any unintentional experiences.

While this new capability marks an exciting milestone, we realize that our work is far from done. We’ll continue to iterate on our sexually explicit tagging with ongoing quality assurance efforts and other improvements. Going forward, we also plan to expand our NSFW tagging to new content types (e.g. video, gifs, etc.) as well as categories (e.g. violent content, mature content, etc.).

While we have a high degree of confidence in the accuracy of our tagging, we know that it won’t be perfect. If you feel that your content has been incorrectly marked as NSFW, you’ll still be able to rely on existing tools and channels to ensure that your content is properly tagged. We hope that this change leads to fewer unintentional experiences on the platform, and overall, a more predictable (i.e. enjoyable) time on Reddit. As always, please don’t hesitate to reach out with any questions or feedback in the comments below. Thank you!

Posted 7 months ago (archived) · 350 points · 88 comments

Hi all,

We want to let you know that we are making some changes to our platform-wide Rule 3 on involuntary pornography. We’re making these changes to provide a clearer sense of the content this rule prohibits as well as how we’re thinking about enforcement.

Specifically, we are changing the term “involuntary pornography” to “non-consensual intimate media” because this term better captures the range of abusive content and behavior we’re trying to enforce against. We are also making edits and additions to the policy detail page to provide examples and clarify the boundaries when sharing intimate or sexually explicit imagery on Reddit. We have also linked relevant resources directly within the policy to make it easier for people to get support if they have been affected by non-consensual intimate media sharing.

This is a serious issue. We want to ensure we are appropriately evolving our enforcement to meet new forms of bad content and behavior trends, as well as reflect feedback we have received from mods and users. Today’s changes are aimed at reducing ambiguity and providing clearer guardrails for everyone—mods, users, and admins—to identify, report, and take action against violating content. We hope this will lead to better understanding, reporting, and enforcement of Rule 3 across the platform.

We’ll stick around for a bit to answer your questions.

[EDIT: Going offline now, thank you for your questions and feedback. We’ll check on this again later.]

Posted 7 months ago (archived) · 201 points · 67 comments

Hey y’all, welcome to February and your Q4 2021 Safety & Security Report. I’m /u/UndrgrndCartographer, Reddit’s CISO & VP of Trust, just popping my head up from my subterranean lair (kinda like Punxsutawney Phil) to celebrate the ending of winter…and the publication of our annual Transparency Report. And since the Transparency Report drills into many of the topics we typically discuss in the quarterly safety & security report, we’ll provide some highlights from the TR, and then a quick read of the quarterly numbers as well as some trends we’re seeing with regard to account security.

2021 Transparency Report

As you may know, we publish these annual reports to provide deeper clarity around our content moderation practices and legal compliance actions. It offers a comprehensive and quantitative look at what we also discuss and share in our quarterly safety reports.

In this year’s report, we offer even more insight into how we handle illegal or unwelcome content as well as content manipulation (such as spam and artificial content promotion), how we identify potentially violating content, and what we do with bad actors on the site (i.e., account sanctions). Here are a few notable figures from the report:

Content Removals

  • In 2021, admins removed 108,626,408 pieces of content in total (27% increase YoY), the vast majority of that for spam and content manipulation (e.g., vote manipulation, “brigading”). This is accompanied by a ~14% growth in posts, comments, and PMs on the platform, and doesn’t include legal / copyright removals, which we track separately.

  • For content policy violations:

    • Not including spam and content manipulation, we removed 8,906,318 pieces of content.

Legal Removals

  • We received 292 requests from law enforcement or government agencies to remove content, a 15% increase from 2020. We complied in whole or part with 73% of these requests.

Requests for User Information


About Community

/r/redditsecurity is a running log of actions taken to ensure the safety and security of reddit.com
33.1k members · 35 online


Created Jan 29, 2019
Restricted

Info


See /r/blog and /r/announcements for other important news about the site

To report content policy violations to the admins, please use this form

If you think you've located a significant security concern, send it to security@reddit.com

Check out /r/ModSupport for information related to subreddit moderation

This is an admin-sponsored subreddit.
