Ever watched something on YouTube and wished you hadn’t? You’re not alone.
Jul 8, 2021


Users watch 1 billion hours of YouTube videos every day, most of them recommended by the company's algorithms. A new study finds some of those suggested videos violate YouTube's own policies.

Most of what people watch on YouTube is recommended by YouTube’s algorithm. Finish one video on how to save a dying houseplant, and it might suggest more. But that system can also send users down rabbit holes that radicalize and misinform.

For almost a year, the Mozilla Foundation has been tracking the viewing habits of more than 37,000 volunteers who installed a browser extension letting them identify videos they called “regrettable.” Mozilla found that 70% of those problematic videos had been recommended by YouTube’s algorithm.

Brandi Geurkink, senior manager of advocacy at Mozilla, led the research and said some of those videos were seen millions of times before YouTube took them down. The following is an edited transcript of our conversation.

Brandi Geurkink (Courtesy Geurkink)

Brandi Geurkink: We were able to verify that YouTube, actually, in some cases recommends videos to people that violate their own content guidelines. So videos that are later taken off of YouTube for violating the platform’s own rules. Which is quite interesting, because it raises questions about whether or not the recommendation algorithm could actually be at odds with YouTube’s stated goals of trying to make their platform a safe and inclusive place for all of their users. 

Kimberly Adams: Why the focus for your research on YouTube in particular? 

Geurkink: YouTube is one of the biggest and most consequential [artificial intelligence] systems that people encounter. More than a billion hours of YouTube is watched every day, and the recommendation algorithm in particular, YouTube has said, drives 70% of that watch time. That’s an estimated 700 million hours that’s being driven by this algorithm that it’s impossible for the public to study and we know very, very little about. YouTube pretty much has complete control over that. We think that the impact that that can have on the public, on our democracies, is incredibly consequential, and yet, it’s so opaque. So that’s really what we’re trying to draw attention to and get them to make that more transparent. 

Adams: You did find that people in non-English-speaking countries, especially Brazil, Germany and France, were more likely to report seeing videos they didn’t want to see. Why do you think that is? 

Geurkink: One hypothesis is that they’re prioritizing their policy changes first in the United States, and then in other English-speaking markets. So YouTube first said that they were rolling out policy changes targeted at solving some of the issues raised in the report in the United States in 2019. And this only came out a few weeks ago, they updated their pages to say now we’ve rolled out these changes in all markets in which [they] operate. 

Adams: You’ve been sharing your research with the folks at YouTube. What’s been their response?

Geurkink: They tend to criticize the methodology quite a lot, and then also highlight all of the progress that they have made on this issue, which sort of downplays some of what the research reveals. We like to highlight that the methodologies, not only of our research, but of a lot of people doing this kind of work, could be improved really significantly if YouTube released the data. So, it’s almost like the response reinforces the point in our campaign and in our advocacy work that in the absence of transparency and data from YouTube, we have to do things like make browser extensions to try to study the platform.

Related links: More insight from Kimberly Adams

YouTube told us it has made more than 30 tweaks to its recommendation system over the last year to help reduce the spread of harmful content. The company also said it tracks what portion of views on the site comes from videos that violate its policies on topics like COVID-19 misinformation and hate speech. That rate has fallen by 70% over the last four years, which the company says is partly due to its investments in algorithms that help take down harmful videos.

Mozilla’s full report has a lot more, including suggestions for how to limit unwanted recommendations. Tip No. 1? Turn off autoplay.

Back in 2020, researchers at Berkeley dug into the efforts YouTube had made to limit the spread of conspiracy theories on its site. One takeaway: while YouTube initially had some success removing things like flat-earth conspiracies, recommendations for those types of videos ticked back up shortly after the company announced that success.

And the Brookings Institution did its own research into YouTube recommendations in the U.S. and Germany. It found that “even when they are not personalized, recommendation algorithms can still learn to promote radical and extremist content.”

Because algorithms are still designed by humans, with all of our flaws.

