What would it take to moderate a platform as big as Facebook?
Jul 10, 2020


Civil rights experts say Facebook doesn't enforce its policies against hate speech consistently.

Facebook released the independent civil rights audit of its platform on Wednesday. The auditors said, among many other things, that “Facebook unevenly enforces or fails to enforce its own policies against prohibited content. Thus, harmful content is left on the platform for too long.”

Facebook has said that it doesn’t want to be an arbiter of truth and that it prioritizes free speech above all. The company has also said that it’s difficult, if not impossible, to actually moderate content on a platform as big as Facebook, Instagram and WhatsApp combined.

So what is at the heart of this problem? I spoke with Spandi Singh, a policy analyst at New America’s Open Technology Institute. The following is an edited transcript of our conversation.

Spandi Singh (Photo courtesy of Singh)

Spandi Singh: I think it’s important to recognize that there does not necessarily need to be one gold-standard system of moderation. In fact, it’s important to encourage platforms to moderate in diverse ways, because that encourages freedom of expression online. But I think what is really important is for platforms to provide more transparency and accountability around how they are training these [human moderators], how accurate they are and what impact these individuals and their activities have on the online speech of users. I think when you’re talking about what platforms as a whole can do better, that is one area in particular where policymakers, civil society and civil rights groups should be encouraging platforms to do more.

Molly Wood: Let’s talk about automated moderation. Is that a similar situation where we just don’t know that much about how these tools work and how good or bad they are?

Singh: Companies tend to tout automated tools as a silver bullet solution for content-moderation issues, but they do not provide adequate data around how accurate these tools are, how they are trained and how they are refined. This is why civil society groups and civil rights groups continue to press companies to always keep humans in the loop, because using these automated tools can have huge consequences for freedom of expression online. We have to be cautious about just telling companies to invest more in these tools. We need to know how effective these tools are before telling companies to just throw more money at them.

Wood: Do you think it’s believable that they can’t? I think that’s been an argument [in which] Facebook is like, there’s too many, there’s too much. It literally is a scale problem. We cannot do it.

Singh: The experience of the past few months has shown that platforms can definitely do more, and sometimes it’s just about how they assess their goals around removing harmful content and protecting freedom of expression. For example, during the COVID-19 pandemic, we’ve seen how platforms such as Facebook have responded rapidly and proactively to the spread of mis- and disinformation. This has raised many questions about why platforms have not taken as proactive an approach against other types of misleading information, such as election- and voter suppression-related disinformation. It’s just about when they decide to take those moves, and what the lever points are that encourage them to do so.

Wood: Would an extra $100 million a year show marked improvement? $500 million, $1 billion? Is it just about money?

Singh: I don’t think it’s just about money. It’s really about what are you investing in? What are you using that money towards? When it comes to human content moderators, there is a lot that needs to be invested in, everything from the mental health of these moderators to what are they being trained in? Again, without that data, it’s really difficult to know where are you already spending that money? What result is that generating, and how does that improve? Where can you invest more to improve your efforts? As a member of the public, we really don’t have access to that data, so it’s difficult to just say, oh, invest like this many million more dollars and your problems will be solved, without really understanding what the scope of the problems are.

Wood: It seems like even if you are trying to give Facebook the benefit of the doubt, it’s hard not to arrive at the conclusion that this is a choice. And they’ve said it’s a choice, a choice around prioritizing speech. What do you think it might take in terms of public pressure or even advertiser pressure to push them to make a different choice?

Singh: I think, in terms of pressure, it is important to recognize that in the U.S., when you talk about policymaker pressure, the government is limited in the extent to which they can direct platforms to moderate content on their platforms. In terms of public pressure, I think with COVID-19, platforms really recognized that this content was having very real and tangible offline results and consequences, so they rallied and they responded effectively. I think with other areas, they’re a little more political, so they may shy away from that. I think as a member of the public, probably just continuously calling on companies to recognize that their platform has a consequence for people offline and trying to really push that. The advertiser approach is also a very interesting one, but I think this is the first time we’re seeing something like this, so we need to wait to see what the actual impacts of that are. It’s a bit early to decipher that right now.

Wood: In terms of transparency, what might that look like, and is it fair for Facebook to say, if we were more transparent about our tools, that we’d be giving away, potentially, business secrets?

Singh: I think there are different tiers of transparency that companies can engage in. If companies have concerns around competition and business secrets, they don’t need to necessarily release valuable data around their tools or their training processes to the public. They can release them to a group of vetted researchers. I definitely would encourage companies to explore these vetted transparency tiers. I think there are definitely solutions. Companies just need to think through them a little bit more.


Related links: More insight from Molly Wood

The Seattle Post-Intelligencer has a really nice roundup of five key takeaways from Facebook’s civil rights audit, including the ways that the platform enables voter suppression and organized hate, troubling exemptions for politicians, that uneven enforcement we mentioned and places where Facebook has made at least a little progress. Bloomberg said that on the topic of voter suppression, auditors were especially concerned about how Facebook handled posts by President Donald Trump that contained inaccurate information about mail-in voting but that Facebook said didn’t violate its policies around voter suppression.

On Tuesday, before the report came out, Chief Operating Officer Sheryl Sandberg said the company needed to “get better at finding and removing hateful content.” On Wednesday, Facebook found some. The platform removed dozens of accounts that it said were spreading coordinated hate speech or misinformation on Facebook and Instagram. The accounts belonged to the hate group Proud Boys, Trump ally Roger Stone and some employees of Brazilian President Jair Bolsonaro.

