Twitter hires social scientists to help figure out our conversation problem
Earlier this week, it emerged that Disney considered buying Twitter back in 2016, but CEO Bob Iger said it was “too toxic” for the family-friendly brand. Twitter CEO Jack Dorsey often says that Twitter needs to think more about how to deal with harassment and hate speech on the platform.
With that in mind, the company has commissioned a two-year study to help it create metrics for what is and isn’t a healthy conversation. I spoke with Rebekah Tromble, who teaches media and politics at George Washington University and is one of the research leads on this project. She said the team is looking at four categories: mutual engagement, diversity of perspectives, incivility and intolerance. And so far, the findings aren’t always what you’d expect. The following is an edited transcript of our conversation.
Rebekah Tromble: We find that increased engagement with diversity of perspectives, for some people, can lead them to essentially get fired up about the views that they already hold. It might entrench their views more deeply to be exposed to a broader range of perspectives, and particularly those with which they disagree. That may be good in the sense that it mobilizes people to participate in the political system. But it could be bad in the sense that it increases polarization.
Molly Wood: Talk to me about how incivility and intolerance play into that, because it sounds like it’s probably more complicated than we think.
Tromble: That’s right. My colleague Patricia Rossini’s work has been foundational. What she suggests is that incivility isn’t inherently bad for all users, and it isn’t inherently bad for democracy itself. It could be that things like swearing or — even to some degree — name-calling of others wind up helping people’s voices break through. On the other hand, intolerance does break core democratic norms in the sense that it targets users based on their protected characteristics, such as race and gender.
Wood: Once you have all of this and you’ve done all this research, what might Twitter do with these metrics and these findings?
Tromble: There are several things, but one thing I want to be careful to clarify here: The work that we’re doing won’t allow Twitter to identify individual tweets on the platform to flag them or take them down. Instead, what we’re providing is a broad assessment: a true measure, across a larger conversation on a topic, of the extent to which these different phenomena appear within that conversation. These broad measures, I think, will ultimately help platforms like Twitter to better understand where and when unhealthy dynamics are most likely to emerge. The information that we’d be providing would allow them to look more closely at potentially problematic spaces as they arise.
Wood: Tell me more about what you think that looks like. Is Twitter interested in guiding conversations, creating tools that pop up a little Clippy-from-Microsoft-Office-style device, like, “I see you might be headed for an uncivil conversation”?
Tromble: Honestly, all those options are possibilities. And because we’re external academic experts on this, we’re not actually privy to the sorts of conversations, the sorts of thinking that Twitter has along these lines. Unfortunately, I can’t tell you too much about what Twitter’s thinking in terms of how they might then apply these metrics and their broader aims here.
Wood: One thing, though, that I can’t help but think about is this controversy: Twitter and Facebook have been criticized, so far without proof, for suppressing conservative voices on their platforms. Are you concerned at all about how your research might be used in that conversation?
Tromble: There’s certainly no doubt that we’re stepping into a politically charged environment. Anytime research or information is released about the platform’s performance, or about any new policies it’s implementing, there’s a rush to take advantage of that in the political arena. We are quite concerned about how our results — which will be political in nature, they will be about political topics — will be politicized. We remain committed to sharing each and every one of our findings publicly, because while some of the criticism we will undoubtedly face will be problematic and unfair, we still want to make sure that those who are viewing our results with an honest and open mind have the opportunity to point out where we might have made mistakes and where we could do things better, and ultimately, that everyone can actually learn from our results.
Wood: What about bots? Is it harder to accurately measure the health of conversations if a lot of the people who are participating are not actually people?
Tromble: I don’t think it’s harder to measure the health of conversations because we’re interested in the overall ecosystem. We’re interested in what even the bots are putting out there. For other types of research, it might be more problematic if there’s all sorts of noise being introduced by the inauthentic behavior of bots. In our case, the bots contribute to the overall conversational environment. We need to understand when the bots themselves are creating part of the problem.
Wood: What’s the biggest challenge you have ahead of you with this work?
Tromble: I think that there are two core challenges. One is simply that the technical challenge we’ve set for ourselves is quite high: developing the metrics themselves and making sure that they are robust will be hard work. It’s complicated to automatically detect and label things like incivility and intolerance. That, of course, will be a challenge. And the second one is a challenge that you touched on already: as we try to communicate our findings to the broader public, making sure that we, of course, are being held accountable, but that we’re being held accountable fairly, and that our findings don’t contribute any more than necessary to an already polarized and, at times, toxic political environment.
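Tromble mentions how hard it is to automatically detect and label incivility and intolerance. As a rough illustration of that problem, here is a minimal Python sketch that scores sample posts with an off-the-shelf toxicity classifier; the “unitary/toxic-bert” model, example texts and labels are assumptions for illustration, not part of the study’s actual methodology.

```python
# A minimal, hypothetical sketch of the kind of automated labeling Tromble
# describes as difficult. This is not the research team's method; the model,
# example texts, and labels are illustrative assumptions only.
from transformers import pipeline

# "unitary/toxic-bert" is a publicly available toxicity classifier on the
# Hugging Face Hub, used here purely as a stand-in for bespoke metrics.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

examples = [
    "I completely disagree with you, and here is the data that shows why.",
    "That's a damn ridiculous take, but at least you're engaging.",
    "People like you shouldn't be allowed to vote.",
]

for text in examples:
    result = classifier(text)[0]  # top label and its confidence score
    # A single toxicity score cannot separate incivility (swearing,
    # name-calling) from intolerance (attacks on protected characteristics),
    # which is exactly the distinction the researchers say matters.
    print(f"{result['label']:>12} ({result['score']:.2f})  {text}")
```

The gap between what a generic classifier flags and the incivility versus intolerance distinction Tromble draws is a reasonable mental model for why the team expects the metric-building work to be hard.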
Related links: More insight from Molly Wood
After these many years of thinking about possible solutions to the toxic conversation problem, Twitter finally rolled out an actual change this week, at least in the United States and Japan. It’s called “hide replies.” You can now hide replies on a thread that you started. They won’t be deleted, and anyone can still view them with an extra click, but it can at least reduce the influence of people or bots who post unhelpful, abusive or bullying responses.
Honestly, this is a very basic step, although some critics say it could be used to silence reasonable disagreement on public posts. Since it’s all still there for the viewing, I think it’s an acceptable trade-off.