Social media slow to take down anti-Muslim content, new research suggests
Social media companies say they are working hard to prevent hate speech from being posted on their platforms, and remove it when it is. But that’s an ongoing challenge as they operate in numerous countries with many languages and social contexts.
A new report from the nonprofit Center for Countering Digital Hate reveals anti-Muslim hate speech and misinformation still proliferate online.
Imran Ahmed is the founder and CEO of the group. The following is an edited transcript of his conversation with Marketplace’s Kimberly Adams on CCDH’s latest research on the problem.
Imran Ahmed: We identified hundreds of pieces of anti-Muslim hate content on their platforms. Now, that in itself is problematic, that we could find it so easily. But then we reported it to the platforms using their own reporting tools, so [by] clicking “Report dangerous post.” We went back a few weeks later to check what action they had taken. What we found was really disturbing: 9 out of 10 times, even when notified about the most egregious hatred — glorifying the terrorist at Christchurch [New Zealand], very, very dangerous conspiracy theories and extreme forms of hatred — they took no action whatsoever.
Kimberly Adams: And do you have any sense of how that compares with the way these platforms respond to other types of hate speech?
Ahmed: It is very comparable to the work that we’ve done studying antisemitism and misogyny. And in fact, it’s very similar numbers to those that we saw when we looked at COVID conspiracy theories and vaccine misinformation.
Adams: How have the platforms responded to your report so far?
Ahmed: To date, most of the platforms haven’t responded. Twitter said, “We know that we can do better.” YouTube appears to have told people that it has taken down a few of the videos, but it didn’t say whether they were removed after our report came out, which is what we believe happened. So with legislation coming into place around the world, whether that’s the United Kingdom’s Online Safety Bill, the European Union’s Digital Services Act, or a raft of bills that have been proposed in the U.S. Congress, I think we’re at the point now where it is time for them to take responsibility and show some accountability for the hatred that they have allowed to proliferate on their platforms.
Adams: In Europe, the Digital Services Act is meant to hold social media companies more accountable for the content that’s on their platforms. What sort of impact do you think that could have on limiting anti-Muslim hate speech and hate speech in general?
Ahmed: Well, the key thing in most of the legislation is that it holds platforms to account to the standards that they set for themselves. So if you fail to meet the standards that you’ve incorporated into your own community standards, you could be liable for damages. And that is, you know, a perfectly reasonable way to regulate that industry.
Adams: What does this sort of hate speech look like across the different platforms? Are there differences in the way that it shows up depending on which platform you’re on?
Ahmed: The truth is that Twitter, for example, is used primarily to influence elite discourse, because Twitter is really quite a small platform. It has about 200 million users, and they tend to be wealthier elites. So it’s where you go to try and shape elite discourse, political discourse and media discourse on Muslims. Facebook is where you drip, drip misinformation. You spread a bit of misinformation every day over a year, so that you slowly color the lens through which users see the world. Instagram, YouTube and TikTok are often used as evidence points, where people post richer misinformation in order to persuade people, and that’s linked to from Twitter, Facebook and other spaces. So they all form part of a coherent ecosystem that bad actors use to spread misinformation and hatred. There is a real-world cost to hate online. The mobilizing ideology behind [the 2019] Christchurch massacre, the “great replacement” theory, was the same ideology that led to the massacre of Jewish people at the Tree of Life synagogue in Pittsburgh. If these platforms haven’t learned their lesson by now, we need the hard backstop of legislation and regulation to ensure they do.
Related links: More insight from Kimberly Adams
In the full report, CCDH tracked a sample of 530 posts containing anti-Muslim hate speech or related content and found that the images, videos and messages on Facebook, Instagram, Twitter and TikTok had been viewed more than 25 million times.
A YouTube spokesperson responded to our request for comment and provided the following statement:
“YouTube’s hate speech and harassment policies outline clear guidelines prohibiting content that promotes violence or hatred against individuals or groups where religion, ethnicity or other protected attributes are being targeted. Of the videos flagged to us by CCDH, five have been removed for violating our hate speech policies and eight have been age-restricted.”
Jack Malon, YouTube spokesperson
YouTube also said that in the last quarter of 2021, it removed more than 410,000 videos for violating its hate speech and harassment policies and that 74% of those videos were taken down before they had more than 10 views.
While Twitter, TikTok, Facebook and Instagram did not provide us with public statements on the report ahead of our deadline, all have policies against hate speech on their platforms.