Safety is a fundamental human need. This is as true online as it is in your own hometown. When you feel safe, you can show up as your authentic self, forge deeper connections with people, and build community.
But safety depends on many things, including the space you’re in, the people you’re with, others around you, and everyone’s expectations of how to interact.
At Discord, we embrace this complexity and recognize that safety is everyone’s job. It starts with the technology we build and the expertise we rely on to manage it. That sets the stage for a culture of safety to take root among users, who play a key role in building norms, enforcing rules, and holding each other accountable.
When we talk about protecting users, we’re talking about protecting them from anything that could cause harm. That includes harassment, hateful conduct, unwanted interactions from others, inappropriate contact, violent and abusive imagery, violent extremism, misinformation, spam, fraud, scams, and other illegal behavior.
It’s a broad set of experiences, but each one can negatively affect someone’s life in very real ways. They also degrade the overall experience of the community where they happen.
In addition to keeping users safe, we also need them to feel safe. That’s why we talk openly about our work, publish safety metrics each quarter, and give everyone access to resources in our Safety Center.
Discord’s commitment to safety is reflected in our staff and in the way we build our products.
Over 15% of our staff works directly on the team that ensures our users have a safe experience. We invest time and resources here because keeping users safe is a core responsibility: It’s central to our mission to create the best place to hang out online and talk to friends.
Our safety work draws on experts from all parts of the company, with a wide range of backgrounds: engineers, product managers, designers, legal and policy specialists, and more. Some come from technology; others from social work, teen media, human rights, and international law. Building a diverse team like this helps ensure we get a 360-degree view of threats and risks, and of the best ways to protect against them.
We also invest in proactively detecting and removing harmful content before it is viewed or experienced by others. During the fourth quarter of 2023, 94% of the servers we removed were removed proactively. We’ve built specialized teams, formed external partnerships with industry experts, and integrated advanced technology and machine learning to keep us at the cutting edge of providing a safe experience for users.
While rules and technical capabilities lay the foundation, users play a central part in making servers safe for themselves and others.
Our Community Guidelines clearly communicate what activities aren’t allowed on Discord. We warn, restrict, or even ban users and servers if they violate those rules. But we don’t want to spend our days chastising and punishing people, and we don’t want people to worry that they’re always at risk of being reported. That’s no way to have fun with friends.
Instead, we help users understand when they’ve done something wrong and nudge them to change their behavior. When people internalize the rules that way, they recognize when they, or someone else, might be breaking them. They discuss the rules organically, and in the process build a shared sense of what keeps their community safe and in good standing.
This has a multiplier effect on our work to build safer spaces: users act with greater intention and more proactively moderate themselves and their communities against unsafe behavior. When communities have this shared commitment to the rules, they’re also more likely to report harmful activity that could put an otherwise healthy community at risk.
It’s sometimes said in technology that a platform can’t really reduce harm or block bad actors from doing bad things, and that those actions can only be addressed after they happen, once they’re reported or detected.
At Discord, we have loftier goals than that.
We actually want to reduce harm. We want to make it so hard for bad actors to use our platform that they don’t even try. For everyone else, we want to build products and provide guidance that make safety not just a technical accomplishment, but a cultural value throughout our company and the communities we support.