Coronavirus conspiracy theories don’t go viral by accident
May 12, 2020

Getting popular influencers to spread your message is a key strategy, says Renée DiResta of the Stanford Internet Observatory.

If you’ve been on the internet in the past week, you’ve probably heard about “Plandemic.” It’s a viral video full of misinformation about the coronavirus, featuring a discredited scientist who accuses public figures, including Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, and philanthropist and Microsoft co-founder Bill Gates, of using the COVID-19 outbreak to seize political power.

YouTube, Facebook and Twitter have been trying to chase the video off the internet, but experts who follow disinformation say it’s no accident that it got so big in the first place. Its spread followed an old playbook, one that’s even more effective in a time of fear and uncertainty.

I spoke with Renée DiResta, the technical research manager at the Stanford Internet Observatory, who studies discourse online. The following is an edited transcript of our conversation.

Renée DiResta. (Photo courtesy of Stanford University)

Renée DiResta: The most recent viral misleading video that went up on Facebook had a strong marketing component to it, because one of the people involved had a book to sell. What you see is the same text being used again and again, a particular core message produced almost the way you would see from a PR team, where the message goes out alongside the link. There’s a real attempt to get it all out at around the same time, in hopes of tricking one of the algorithms that amplify popular content on platforms into thinking it’s seeing a simultaneous, organic outpouring of interest in a person or topic.
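To make that concrete, here is a minimal sketch of how the copy-paste pattern DiResta describes might be spotted: many distinct accounts posting the same core message within a short window. It assumes posts arrive as (timestamp, account, text) tuples; the thresholds and function names are illustrative assumptions, not any platform’s actual system.

```python
# A minimal sketch of detecting the coordinated copy-paste pattern
# described above: the same message posted by many accounts at nearly
# the same time. Thresholds here are illustrative, not real values.
from collections import defaultdict
from datetime import timedelta

def normalize(text: str) -> str:
    """Collapse case and whitespace so trivially edited copies match."""
    return " ".join(text.lower().split())

def flag_copy_paste_bursts(posts, window_minutes=10, min_accounts=20):
    """posts: iterable of (timestamp, account_id, text) tuples.
    Returns messages posted by many distinct accounts within a short
    window -- a signal of possible coordination, not proof of it."""
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[normalize(text)].append((ts, account))

    window = timedelta(minutes=window_minutes)
    flagged = []
    for text, hits in by_text.items():
        hits.sort(key=lambda h: h[0])
        for start_ts, _ in hits:
            # count distinct accounts posting inside the sliding window
            accounts = {a for ts, a in hits if start_ts <= ts <= start_ts + window}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```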

Molly Wood: What role do other influencers play in getting this into the mainstream? Is part of the campaign to have a list of ideal influencers that you would love to get to retweet your content?

DiResta: Yes, absolutely. If you can get it to a sufficient number of “blue check” accounts [on Twitter] that have a million or 2 million followers, you’ve really ensured that you’re reaching an existing mass audience. People think those accounts are more authoritative. A trusted influencer with a large audience does a lot both to reach people and to create an environment in which the information comes from someone you trust.

Wood: How can journalists avoid amplifying these claims? What is the right response to a coordinated misinformation strategy, and how do you even identify it quickly enough?

DiResta: With regard to this particular video with the discredited scientist, that was actually the third or fourth video posted to YouTube that got significant traction. For those of us who watch the anti-vaccine and health misinformation communities, this was a train wreck in very, very slow motion. People had seen indications that this marketing effort was really trying to turn [Judy Mikovits] into an influencer. The problem was, nobody wanted to write the story, including researchers, because you don’t want to give it oxygen if it’s just going to stay confined to the communities that are its natural affinity spaces. Then you see the debunking pieces begin to go up. The challenge is that by that point, the viral misinformation has had a chance to really sit with people for a couple of days, and the correction is not going to travel as far as the original did. This is where I do think platforms bear a bit more responsibility: when you see a video, a post or repetitive content about a person begin to gain traction in what looks like an inauthentic or coordinated way, there’s an opportunity to temporarily throttle it while a fact-checker goes and looks at what kind of information it is and how it should be dealt with.
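As a rough illustration of that throttle-then-review idea, here is a hedged sketch. Every function, verdict and outcome string is a hypothetical stand-in; real moderation pipelines are far more involved than this.

```python
# A hedged sketch of the "temporarily throttle while a fact-checker
# looks" flow suggested above. All names here are hypothetical.
from enum import Enum

class Verdict(Enum):
    ACCURATE = "accurate"
    MISLEADING = "misleading"
    HARMFUL = "harmful"

def set_distribution_cap(item_id: str, enabled: bool) -> None:
    """Hypothetical hook that caps how widely an item is recommended."""
    print(f"{item_id}: distribution cap {'on' if enabled else 'off'}")

def handle_trending_item(item_id: str, looks_coordinated: bool, fact_check) -> str:
    """Throttle first, judge second: cap reach while a human reviews,
    then restore, label, or remove based on the verdict."""
    if not looks_coordinated:
        return "no action"
    set_distribution_cap(item_id, enabled=True)
    verdict = fact_check(item_id)  # slow human review happens here
    if verdict is Verdict.ACCURATE:
        set_distribution_cap(item_id, enabled=False)
        return "restored"
    if verdict is Verdict.MISLEADING:
        return "labeled, still throttled"
    return "removed"
```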

Wood: Are you saying that platforms may have the technology, or just the eyes on it, to be able to get to this more quickly?

DiResta: You can see when the “velocity of mentions,” as it’s sometimes called, begins to spike. If Coca-Cola is all of a sudden mentioned 500 times in a minute, it’ll flag it for their brand-monitoring people, who will then go and look and see what’s going on. These tools for social listening and understanding that kind of dynamic do exist.
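A minimal sketch of that kind of mention-velocity monitoring might look like the following. The 500-mentions-per-minute figure comes from the conversation; the baseline, spike factor and function name are illustrative assumptions.

```python
# A minimal sketch of "velocity of mentions" monitoring: count mentions
# per minute and flag minutes that blow far past a recent baseline.
from collections import Counter

def mention_velocity_alerts(mention_timestamps, baseline_per_minute=5, spike_factor=100):
    """mention_timestamps: iterable of epoch seconds, one per mention.
    Returns the start times (epoch seconds) of minutes whose volume
    exceeds baseline * spike_factor (e.g., 500/min vs. a baseline of 5)."""
    per_minute = Counter(int(ts // 60) for ts in mention_timestamps)
    threshold = baseline_per_minute * spike_factor
    return sorted(minute * 60 for minute, count in per_minute.items()
                  if count >= threshold)
```

In a real social-listening tool the baseline would be estimated from trailing history rather than fixed, but the core signal is the same: a sudden burst of mentions far above the norm triggers human review.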

Wood: Let’s say the content gets taken down, in which case the conspiracy theorist becomes a digital martyr. Is one of the goals actually to be taken down, so that you can then claim you were censored?

DiResta: What we saw in the prior takedowns is exactly what you’ve said. In the video that achieved mass appeal, which was actually the third, they declare that someone somewhere is trying to keep people from knowing the truth. That’s why, in some cases, the better response is not a takedown but ensuring that accurate information is attached as quickly as possible.

Wood: What are the opportunities this time, exactly? There have been a lot of conversations about misinformation and about manipulating platforms. Platforms have tried to take a hands-off approach, and now they’re essentially saying, “The virus is a clear and present public health danger; we can act more aggressively because the benefits are indisputable from a health perspective.” Does that give us an opportunity for research, lasting change, better data or really understanding misinformation?

DiResta: Some of the work that we’re doing at Stanford actually looks at the initial application of those policies. The coronavirus misinformation policy is an extension of policies that were put in place in 2019 in response to the repeated measles outbreaks. When the Brooklyn measles outbreak struck, the platforms did begin to take steps to reduce the reach that anti-vaccine groups were getting. They did that in some very basic ways: they stopped promoting those groups in the recommendation engine, and they stopped accepting money to run ads on their behalf. The interesting challenge we’ve seen come out of this is that the policies were designed to amplify the CDC and the World Health Organization and their information. But during the coronavirus, there was the additional problem of health institutions trying to figure out what was going on at the same time as everybody else. There wasn’t always great information to be amplified. With something like hydroxychloroquine, a scientific authority isn’t going to come out with a strong, firm judgment on whether it works before the research is done. There’s this gap where people really want information, but the science isn’t there yet, so what fills the void is whatever any random person with a sufficiently large audience produces about hydroxychloroquine. There’s just not a whole lot of authoritative information for the platforms to amplify. This has been an interesting time for understanding how you get information to the public when there’s no authoritative information yet.

Related links: More insight from Molly Wood

If you don’t find yourself too enraged by the level of willful conspiracy belief among your family and friends on Facebook, The Atlantic has a piece from last week on what to say to people who share links to things like the “Plandemic” video or other debunked disinformation. It says you’ll have the most luck with people who are curious or uncertain, wondering if a thing could possibly be true, and that you should try to empathize with the fear and uncertainty: “don’t lecture someone, don’t get exasperated, don’t insult them, and don’t try to refute specific falsehoods.” This is, at times, a gargantuan task. I will forgive you if you just have to walk away.

Twitter said Monday it will warn users when tweets contain information that’s in dispute or could be misleading, and will try to link to additional context. Twitter said it will remove posts only when they’re deemed to be actually harmful.

One part of this story is Amazon’s role in the disinformation ecosystem. The discredited scientist who stars in the “Plandemic” videos is, as Renée DiResta said, selling a book. That book is No. 5 on Amazon’s bestseller list as I write this; it’s been as high as No. 1 in recent days. The company has come under fire before for promoting books that contain health-related misinformation. Last year, Amazon removed some books that claimed to cure autism, but the company told a Wired UK reporter that this book does not violate its content guidelines.


The team

Molly Wood Host
Michael Lipkin Senior Producer
Stephanie Hughes Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer