The best way to counter hate speech online might be to have a bot call it out
Oct 8, 2019

Intel and UC Santa Barbara are working on an AI model that detects and responds to hate speech.

We have ample evidence that hate speech online can easily spread offline. Artificial intelligence can help spot racist, violent or sexist speech. But then what? Human moderators must read horrible things to verify whether they’re hate speech, taking down posts draws free speech complaints, and it’s trivially easy to make a new account and spew the same terrible things.

Researchers at Intel and the University of California, Santa Barbara, are proposing a new approach: use AI to identify hate speech, then generate an automated response to those messages, like pointing out that the words used could be offensive or warning people that they’re violating terms of service.
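
To make the idea concrete, here’s a minimal sketch of such a detect-then-respond loop in Python. Everything in it is an illustrative assumption: the toy training examples, the TF-IDF-plus-logistic-regression classifier and the canned intervention message are stand-ins, not the Intel/UCSB model.

```python
# A toy detect-then-respond loop. The training examples, classifier
# choice, and intervention wording are all illustrative assumptions.
from typing import Optional

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled corpus standing in for a real hate-speech data set
# (1 = hateful, 0 = benign).
posts = [
    "you people are subhuman garbage",
    "go back to where you came from",
    "great game last night, well played",
    "thanks for sharing this, really helpful",
]
labels = [1, 1, 0, 0]

# Fit a simple bag-of-words classifier on the toy corpus.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(posts, labels)

INTERVENTION = (
    "This post may be hurtful to others and could violate the "
    "community's terms of service. Please consider rephrasing."
)

def respond(post: str) -> Optional[str]:
    """Return an automated counter-response if the post looks hateful."""
    if detector.predict([post])[0] == 1:
        return INTERVENTION
    return None

print(respond("you people are subhuman garbage"))  # prints the intervention
print(respond("great game last night"))            # prints None
```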

I spoke with Anna Bethke, head of AI for social good at Intel. She walked me through how their AI model works. The following is an edited transcript of our conversation.

Anna Bethke: A lot of these mechanisms to remove hate speech online aren’t really working. This is using artificial intelligence to be able to insert a comment that would mitigate a hate speech rant, or a situation that could very easily escalate. It’s taking in the information of the post that it’s responding to, and the entire thread as well, so it knows what the context is.
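
As a rough illustration of that context step, a responder might assemble its input from the whole thread plus the offending post. The “[SEP]” separator and function names below are assumptions for the sketch, borrowed from common transformer conventions, not details of the researchers’ model.

```python
# Sketch of assembling model input from the whole thread, so the
# system "knows what the context is." The [SEP] separator and the
# function name are assumptions for illustration.
from typing import List

def build_model_input(thread: List[str], offending_post: str) -> str:
    """Join earlier posts in the thread with the post being answered."""
    history = " [SEP] ".join(thread)
    return f"{history} [SEP] {offending_post}"

context = [
    "Did anyone watch the debate last night?",
    "Yeah, the comments got pretty heated.",
]
print(build_model_input(context, "people like you shouldn't be allowed to vote"))
```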

Molly Wood: In your research, is there evidence that that intervention works? What happens after an auto-responder is inserted like that?

Bethke: That’s something that we really need to test out further. This was steps A and B, perhaps: being able to identify the hate speech and identify a method to intervene. The next step is working in these different communities, with a group like Reddit, or Gab, which we used as the other data set, or Twitter, and running tests with no intervention, intervention responses by a person, and then responses by an algorithm.
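
A test like the one Bethke describes could randomize flagged threads across those three conditions. The sketch below shows one hypothetical way to do the assignment; the arm names and deterministic seeding are assumptions for illustration, not the study’s actual design.

```python
# Hypothetical assignment of flagged threads to the three test arms
# Bethke mentions. Arm names and the deterministic seeding are
# assumptions for illustration, not the study's actual design.
import random

ARMS = ("no_intervention", "human_response", "algorithm_response")

def assign_arm(thread_id: str, seed: str = "pilot-1") -> str:
    """Deterministically randomize each flagged thread into one arm."""
    rng = random.Random(f"{seed}:{thread_id}")
    return rng.choice(ARMS)

# Bucket a batch of flagged threads for the experiment.
for thread_id in ["t-001", "t-002", "t-003", "t-004"]:
    print(thread_id, "->", assign_arm(thread_id))
```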

Wood: What are the long-term goals? Do you hope that this technology can reduce some of the burden on human moderators?

Bethke: That would be spectacular. Human moderators are carrying a very heavy burden, and it is a harmful job for them from a mental health perspective because they’re just looking at hate message after hate message after hate message. I know myself, it was hard for me even just to look at this data set and to work with it, and it would be a job that I know I would not be able to perform. I don’t think it’s fair for us to put such a burden on people. I know that there are ways that all these platforms are helping their content moderators, but still, no one should have to read these.

Related links: More insight from Molly Wood

This type of technology is suddenly even more relevant to Facebook. The European Union’s top court ruled last week that regulators in member countries can force Facebook to take down hate speech posts anywhere in the world. Facebook and some human rights groups said that ruling essentially lets one country or region decide what speech people all over the world can see. But governments and civil rights groups have complained that platforms like Google, Twitter and Facebook haven’t done enough to combat violent, extremist or other hateful speech online, a complaint that is arguably also fair.

These platforms don’t want to have to censor speech. That’s obviously a huge, hairy proposition. I would argue that we don’t want the CEOs of private companies to have the power to either promote or block, for example, the views of the president of the United States. But it might be the only way for them to stay in business.

A former public policy adviser to Facebook told ABC News last month that it’s in Facebook’s long-term economic interest to do a better job monitoring and discouraging hateful speech; otherwise it will face more and more regulation, like the EU ruling handed down last week. Facebook rolled out new tools as recently as last month.

There’s also a piece The New York Times published over the weekend, provocatively titled “Free Speech Is Killing Us.”
