Section 230 co-author says the law doesn’t protect AI chatbots
May 19, 2023

The Communications Decency Act provision shields platforms from liability for the things their users say online. Former Congressman Chris Cox says those protections end at artificial intelligence.

The U.S. Supreme Court delivered a win to Big Tech on Thursday when it sidestepped weighing in on the limits of a key piece of tech law called Section 230.

It’s a provision of the Communications Decency Act that shields internet companies from liability for their users’ content. In recent years, it’s become a target for both legal challenges and political attacks. Add to the mix artificial intelligence, which is raising new questions.

Marketplace’s Meghan McCarty Carino spoke to former Congressman Chris Cox, a Republican who co-authored the law along with Democratic Sen. Ron Wyden back in 1996. Overall, he said, the law has held up after 27 years.

The following is an edited transcript of their conversation.

Chris Cox: Because there are always trade-offs involved, and no legislative solution will be perfect, overall the law has worked. It has largely worked to ensure that people have an opportunity to post content and to read others’ content in ways that they otherwise wouldn’t because no one would have been willing to take on that liability. Today, just as in the 1990s, there are trade-offs involved. As Congress looks at new cutting-edge issues, they’re going to go through this same exercise anew, I think. They’ll have to make new trade-offs that will hopefully protect free speech and hopefully give people more access to information, rather than less. We’ll see where it all ends up.

Meghan McCarty Carino: It feels like we’re at another inflection point with these new generative artificial intelligence tools like ChatGPT. In your view, does Section 230 protect speech generated by these tools?

Cox: The way that the law is written, it does not protect that kind of speech. Section 230 proceeds on the premise that the content creator is liable for their speech. So, when ChatGPT makes things up, in what they call hallucinations, you’ve got a couple things going on.

First is content creation. There’s no question that the computer is creating some original content. Second, you have falsehoods. Those two things are the oily rags in the corner of the basement that are likely to become a fire. We can already see the harm that could come if this proceeds along the current path, and particularly if the law somehow immunized those kinds of public falsehoods.

McCarty Carino: Last month, The Washington Post reported about this situation in which ChatGPT generated text that named a specific law professor and said that he had sexually harassed one of his students. The bot even cited a Washington Post article as evidence, but it was all a “hallucination.” None of what the bot had said was true. What are the implications here when we consider whether Section 230 protects ChatGPT?

Cox: Section 230 clearly would not protect that, in my view. That doesn’t mean that it’s not a problem.

There are great questions about what to do about all of this misinformation being spewed out by these chatbots. What will be required is recognition on the part of the developers of these applications that there are existing laws — and lots of them — that will penalize them if they end up creating harm with their inventions.

It’s also important to point out that there is an enormous amount of really happy and good things that will come from this technology. So, we don’t want it to go away entirely. These great leaps in technology are at once terrific and also scary when one thinks about the harm that could come if the technology is used improperly.

McCarty Carino: Sam Altman, the CEO of OpenAI, was on Capitol Hill earlier this week, where he testified before a Senate subcommittee about this technology. What did you make of his remarks?

Cox: It reminded me of the early days of talking about Section 230, when Democrats and Republicans were all at the same starting line trying to understand new technology. There was genuine inquiry, and people were humble. I think that means there’s a good opportunity here for people to work together to come up with the right answers from a legal standpoint so that we can squeeze all the benefits we want out of this technology and minimize all the potential harms.

McCarty Carino: Misinformation and disinformation online have been a growing problem. Some experts say that AI will only make it worse. Do we have the tools to address that issue if we’re not holding tech companies accountable?

Cox: The volume of information online is both a blessing and a curse. We have lots of sources of information, we have lots of access, and we have the right to publish. But the flip side of that is that there has to be some kind of reality check. The saying goes that a lie can get around the world before the truth gets its pants on. That is the flip side of this technology.

But implicit in your question is the idea that a potential solution would be to say that the platform itself is accountable for the truth or falsity of every piece of information that’s posted. If that’s the legal standard, then I’m afraid that information simply would never be posted in the first place. So, there’s no free lunch here. There’s no easy answer. I think we have to acknowledge that going in.

I think the best check on misinformation is to run it to ground and use the legal rights and remedies that we have to hold accountable the people who create the misinformation. The fact that it might be using new technology, or it’s an algorithm that does it, shouldn’t confound us. Algorithms are really, in a legal sense, no different than text. People in the end are going to be responsible, not algorithms. If the law takes that approach, I think we’ll have a ready solution here. We don’t need to be flummoxed.

The Supreme Court ruled Thursday in Twitter v. Taamneh and Gonzalez v. Google, two cases that sought to hold social media platforms liable for terrorist content.

The court ruled that Twitter couldn’t be held liable for aiding and abetting terrorists under an antiterrorism law, which meant the justices never had to interpret Section 230. The Google case was sent back to the lower court.

These decisions were highly anticipated because they could have radically changed how the internet works. We talked about the possibilities with Eric Goldman at Santa Clara University School of Law when the court decided to take up the cases last year.

But these aren’t the last big cases involving social media.

The Supreme Court delayed taking up some challenges to laws in Texas and Florida that attempt to regulate how social media platforms moderate content. Those laws have been blocked from taking effect in the meantime.

The high court will most likely hear those cases during its next term, just in time for the 2024 presidential election.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer