Taking down content is not censorship. It’s business.
On the show recently, I talked about tech companies and social media platforms regulating speech — banning President Donald Trump’s accounts and others, removing groups and topics, and even booting Parler off of app stores and Amazon web hosting. And of course, there’s been a lot of backlash, claims of censorship and questions about whether speech on social media should be regulated by the government.
All of that gets us to a topic that’s worth revisiting right now, which is the First Amendment. I spoke with Berin Szóka, the president of the nonprofit TechFreedom. He says, first of all, we have to get our vocabulary right. The following is an edited transcript of our conversation.
Berin Szóka: When Facebook bans somebody, that’s not censorship. Censorship is when the government makes a private company take something down — period, end of story. So we really need to stop using that term in that broad way. But if you want to have social media, and you want to have it moderated responsibly, you need Section 230 [of the Communications Decency Act] to do both things: to protect websites against lawsuits for hosting content, and also to ensure that they can take down egregious content. That’s what’s allowed the internet to develop. It’s not perfect, but that’s just how speech is, and the First Amendment means we can’t regulate most of the things people are complaining about.
Molly Wood: One other wrinkle that I find interesting, though, is this question of competition. And one of the complaints about, for example, Parler being kicked off of the Apple App Store is that it also reduces competition.
Szóka: Antitrust law doesn’t protect private media companies when they abuse their market power in economic ways. But the First Amendment does protect private media companies when they make editorial decisions about what kind of content they want to host. So Google, Amazon and Apple are 100% within their First Amendment rights to refuse to do business with a company that allows its users to incite murder and to disseminate images of children being sexually abused.
Wood: Should we have any concern about platforms abusing that sort of legal right in the future, though, to remove competition? Or do you think this case is fairly contained?
Szóka: I’m not saying that you shouldn’t be concerned. I’m saying that the Constitution forbids the government from doing anything about this. Look at the fact that corporate America has cut off the flow of money to Republicans who refuse to acknowledge the election results. Bring that kind of pressure to bear on companies that advertise on these services, and the services will, on the one hand, take greater responsibility for truly heinous content. On the other hand, if you’re concerned about these services having too much power, look, these services come and go. No one is locked in forever. And if they do abuse their market power in economic ways, the antitrust laws are still there as a potential remedy.
Wood: OK, so this is all crystal clear from a legal perspective. But we do love to overcorrect in this country, and there are conversations about potentially enacting hate speech regulation or bringing back the Fairness Doctrine. Where do you think we’re going to go from here as a society, whether or not the legal arguments are clear?
Szóka: I think Congress may try to do some of those things. But hate speech regulation is never going to pass muster under the First Amendment in this country. There is a very limited window for Congress to legislate in ways that the First Amendment could permit. And the case to look at here is the [United States v.] Alvarez decision, where the Supreme Court most recently dealt with false speech. It ruled that, in general, false speech is protected by the First Amendment, and the exceptions it looked at were for things like perjury, which undermine the basis of our legal system. I think if you start with that case, you might be able to craft very narrow, very targeted laws that might, for example, prohibit people from misleading voters about the time and place of elections, or about whether you can vote online — things like that. That would make a difference. But the First Amendment is never going to allow the criminalization or punishment of people just falsely claiming that the election was stolen. I don’t like that, you don’t like that, but the First Amendment is really difficult, and we’re stuck with it unless we can get a constitutional amendment passed.
Wood: With respect to these big platforms, does it seem like this is, in some ways, business working as it should? Like, they’re evolving to the point where it is a business imperative to, for example, do something about the misinformation problem?
Szóka: Yeah, 100%. Look, Twitter and Facebook have exactly the same business model that every newspaper and magazine has always had, which is to get advertisers to be willing to put their ads next to content, including content that comes from third parties, just like letters to the editor. Facebook and Google and Twitter have a strong incentive to ensure that their advertisers are comfortable with the content on their services. That’s different from Parler. Parler doesn’t care. Parler’s business model is not to sell ads to companies that care whether their products are being advertised next to Holocaust denial. Parler’s business model is to sell the opportunity to market to the craziest people in America: hate speech and political ads for Republican candidates, and then things like gold bars and herbal Viagra supplements. That’s fundamentally the difference between Parler and Twitter.
Wood: So what do you think happens over the next couple of years? There is obviously a lot of rhetoric around Section 230, around various types of regulation. Do you think that it’s going to be a couple of years of arguing, or do you think some narrow regulation might actually emerge?
Szóka: I think Congress is under enormous pressure to do something. The most likely thing they’ll do would be to try to craft some kind of transparency mandate. They would say, as the [Platform Accountability and Consumer Transparency] Act did in the last Congress, that they’re not trying to interfere with the decisions that companies make, they’re just trying to require greater transparency. I understand what they’re trying to do, but that raises another set of First Amendment concerns, which essentially boil down to this: You can’t require Fox News to disclose how it decides which guests to invite; you can’t force The New York Times to make the same kind of disclosure about which op-eds it runs. And you can’t, for the same reasons, compel social media companies to describe in detail where the line is on Holocaust denial or racism. So these bipartisan ideas are well intentioned, but I don’t think they will stand up in court either.