Preventing online abuse against women requires more than a “block” button
No news here: the internet can be a scary place for women, from mansplaining and unwanted advances to coordinated harassment campaigns, doxxing and physical threats.
Nina Jankowicz is the author of a new book about this, “How to Be a Woman Online: Surviving Abuse and Harassment, and How to Fight Back,” which comes out Thursday. She wrote it after personally experiencing abuse online. Jankowicz is also an expert on disinformation and a global fellow at the Woodrow Wilson International Center for Scholars’ Kennan Institute.
She says women are bearing most of the burden of fighting online trolls. The following is an edited transcript of our conversation.
Nina Jankowicz: We need social media platforms to take some of this onus off the targets of abuse because right now, it’s basically on us to manage this. We are the ones who need to report the harassment to platforms and then wait for them to decide whether they’re going to do anything about it. These are multibillion-dollar platforms that have now become such an intrinsic part of our lives that I think they owe it to 50% of the world’s population to try to keep us a little bit safer online. So I’d like to see platforms taking more of an active role in trying to root out this abuse, particularly the networked abuse. We see coordinated campaigns against women, especially high-profile women, that are organized entirely on these platforms. And [the platforms] can see that from the back end, so I’d love to see a little bit more of that.
And then in general, I think we need to talk a lot more about building platforms from the very start with women and marginalized communities in mind. Often we hear about new features on a platform that seem really cool for your average white dude. But when you think about it from the context of a woman or someone with an intersectional identity, it’s very clear that those new affordances can be used for harassment or worse. Making sure platforms think through the impact on the most vulnerable communities online is, I think, incredibly important.
Meghan McCarty Carino: When it comes to social media platforms, are there particular ones or particular tools that have been more successful with reporting and removing abuse?
Jankowicz: Well, we’ve seen a lot of really encouraging signs from Twitter in particular lately. They have moved to a human-centered reporting approach that I think they’re testing out, and they’re also testing removing yourself from a conversation so that if you’ve been tagged, you can untag yourself. If it’s getting too hurtful, you can make sure that you’re removed and you’re not getting notifications for all of those mentions. But again, these are both putting the onus on the abused person rather than on the platform. So I would hope to soon see platforms, again, taking more proactive steps to make their users feel safer.
McCarty Carino: And of course, not all tools that do exist are foolproof. How do some trolls avoid detection by algorithms?
Jankowicz: Yeah, this is one of the most interesting things that I’ve found both in my personal experience and in the research that I’ve done. So when I dealt with the first wave of really horrific abuse that I had ever experienced, I was taken aback by the way that men started to use what I then called malign creativity. Rather than writing out a word straight, a slur like the B-word, they would write “B!TCH,” so that the [artificial intelligence] that was searching for slurs like that wouldn’t pick it up, or they would put spaces between letters or use symbols. There’s also image-based abuse because often platforms aren’t tracking abuse via images. So they are getting really good at evading detection, and that means even when you update a list of muted keywords, for instance, you’re not necessarily going to be catching everything, and some of that abuse is going to make its way through to you.
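To make that concrete, here is a minimal, illustrative sketch in Python of why a plain muted-keyword filter misses obfuscated spellings like “B!TCH” or “b i t c h,” and how a simple normalization pass catches some of them. The keyword list, substitution map and function names are hypothetical, not any platform’s actual moderation system:

```python
# Illustrative toy filter only -- not any platform's real moderation pipeline.
# It shows why exact keyword matching misses "malign creativity" such as
# "B!TCH" or "b i t c h," and how naive normalization narrows the gap.

MUTED_KEYWORDS = {"bitch"}  # hypothetical muted-word list

# Undo common symbol/digit substitutions: "!" and "1" -> "i", "@" and "4" -> "a", etc.
LEET_MAP = str.maketrans("!1@4$03", "iiaasoe")

def normalize(text: str) -> str:
    """Lowercase, reverse common substitutions, then keep letters only,
    so spaced-out or punctuated spellings collapse back together."""
    text = text.lower().translate(LEET_MAP)
    return "".join(ch for ch in text if ch.isalpha())

def naive_filter(text: str) -> bool:
    """Exact word matching -- the approach that obfuscation defeats."""
    return any(word in text.lower().split() for word in MUTED_KEYWORDS)

def normalized_filter(text: str) -> bool:
    """Substring matching on the normalized text."""
    collapsed = normalize(text)
    return any(word in collapsed for word in MUTED_KEYWORDS)

for msg in ["you B!TCH", "b i t c h", "b.i.t.c.h", "nice bench"]:
    print(f"{msg!r:14} naive={naive_filter(msg)} normalized={normalized_filter(msg)}")
```

Even the slightly smarter version is easy to beat with new substitutions, homoglyphs or misspellings, and collapsing text this way risks false positives on innocent words, which is part of why updated keyword lists still let abuse through and why image-based abuse evades text matching entirely.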
McCarty Carino: For many women, their jobs rely on having some sort of online presence and building a public persona. What more can employers do to respond when things get out of hand?
Jankowicz: Well, I think we’ve seen certain newsrooms start to think about this for their journalists because journalists obviously have an extremely public-facing profile online and use the internet both to collect and source stories and to amplify their stories as well. But we haven’t really seen other sectors picking up the slack, especially academia [and] health care, which has been in the spotlight during the COVID-19 pandemic. So I think it’s really important that individuals start to advocate with their managers, with human resources, and say, “Here I am doing this public engagement. Here are some ways that you can support me.” And in the book, I go through a bunch of them. One of them is if I get doxxed and my home address is leaked, will you put me up in a hotel for a couple of weeks until it dies down? Is it covered for me to go to therapy if I’m experiencing this sort of abuse and harassment? A lot of employers have pretty detailed policies about what you can and can’t say on social media, but they’re not thinking through the consequences of what is said to you, often as a result of your engagement for that employer online.
McCarty Carino: Of course, we’ve been hearing a lot about Elon Musk’s interest in buying Twitter. It’s part of what he calls his “crusade against censorship” on the platform. But what happens when powerful people frame this discussion in terms of free speech?
Jankowicz: Abuse doesn’t really fall under the free speech spectrum because ultimately what it seeks to do is to silence its targets. And so, by allowing abuse to continue, yes, you might be giving speech to some who are using it to abuse people, but what you’re doing is pushing others out of the public discourse. And so I think we need to think very, very carefully about what “speech” means and for whom we are talking about this speech being accessible. Because if we moved toward more of a free speech absolutist paradigm on the internet, that would mean less speech for women and less speech for marginalized communities.
Related links: More insight from Meghan McCarty Carino
Nina Jankowicz’s website, cleverly called Wiczipedia, features more information on her new book, which includes a chapter on trolls.
And you don’t have to look too hard in the news for evidence of these disturbing trends. A recent report from the U.K.-based Center for Countering Digital Hate analyzed the Instagram direct messages of five high-profile women in media or activism. The group found that of the more than 8,500 messages, 1 in every 15 broke Instagram’s own rules against harassment and abuse. The messages included pornographic images and threats of physical violence, according to The New York Times.
In a statement to The Times, Instagram, which is part of Meta, disputed the conclusions and pointed to features meant to limit harassment, such as settings that allow users to control who can message them or filter out messages with certain words.
Some recent reporting from Vice shows what can happen when tech companies fail to anticipate what Jankowicz calls malign creativity. Vice went through records from some of the biggest police departments in the U.S. looking at complaints about Apple AirTags. Those are coin-size tracking devices that can be used to keep tabs on keys or wallets. But Vice found dozens of instances in which they were allegedly used to surveil, stalk and harass women.