ChatGPT is a content host and creator. Does that make it liable for what it produces?
Mar 2, 2023


Matt Perault of UNC Chapel Hill says Section 230 won't protect generative AI tools, and that could have social consequences.

So much of the internet we have today rests on the bedrock of a federal law that shields tech companies from liability for the content users post online. Everything from the AOL chatrooms of yore to modern social media likely wouldn’t exist without Section 230 of the 1996 Communications Decency Act.

The idea is that internet platforms aren’t acting like traditional publishers when it comes to creating content; they’re merely hosting it.

But new artificial intelligence tools like DALL-E or ChatGPT that generate images or text are kind of different, said Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. He recently wrote an essay for Lawfare, and spoke with Marketplace’s Meghan McCarty Carino about the implications of these tools not being protected by Section 230. The following is an edited transcript of their conversation.

Matt Perault (Courtesy Hunter Stark)

Matt Perault: Platforms like ChatGPT and other generative AI products will bear some liability, I think, at least in some courts, and there will be critics who think that’s a good thing. The rationale is that by bearing more liability costs, those platforms will be incentivized to build their tools in ways that reduce the likelihood that illegal content is produced. But if generative AI tools are stunted by legal risk, I think there’s going to be a lot of innovation left on the table.

Meghan McCarty Carino: When it comes to this kind of dichotomy of creating versus hosting, I think there’s probably still a lot of debate to be had about whether tools like this are actually creating when they’re basically scraping existing user content from the internet in order to generate what they’re generating. I mean, do you think that there’s any daylight there to say that this is not creating content?

Perault: I certainly think that there is. And I think this is probably going to be a topic that’s actively litigated. Once there’s that possibility that generative AI platforms will be held liable in certain circumstances, that means the tools will be deployed and designed in ways to reduce that legal risk. I think it means there are fewer startups that will enter the market. When you lose that innovation potential, there will be consequences for society. If you believe that generative AI has the possibility of connecting people to better information, to enabling people to pursue various different things in their own lives that are important to them — when the tools are built in more limited ways and we realize less of that value, there are meaningful costs.

McCarty Carino: Yeah, how would you describe how important Section 230 protections have been to the development of our modern internet economy and what it would mean for this not to apply here?

Perault: They’ve been critical. The scholar Jeff Kosseff famously called Section 230 “the twenty-six words that created the internet.” Some people think of it as just enabling tech companies to grow without being responsible for harmful social effects. But Section 230 is not just about liability protection so that platforms can host whatever they want; it’s about enabling tech platforms to have terms of service, to moderate content, and to remove harmful content from their services without worrying about being dragged into court when they do. So it has enabled not just the hosting of content, but also the content moderation that we have. And my view is that it’s probably the correct interpretation of the current law to find that generative AI tools fall outside its scope in many circumstances. Not all; it’s a product-specific inquiry.

But if we want this technology to flourish and to realize its fullest social potential, I don’t think that should be the liability regime we have going forward. The possibility of being dragged into court for the most expansive implementations of this technology will mean that we don’t see them the way we otherwise would. And I think there will be social consequences to having the technology marginalized to narrow use cases that minimize legal risk.

McCarty Carino: What would be the worst-case scenario in terms of a challenge to Section 230 protections for this kind of technology — how could that play out?

Perault: So I guess my greatest fear is the one that’s hardest to quantify, which is this: significant legal liability for enabling these tools to do the things that will be most beneficial to society in the long run. If you think these tools are going to be harmful, that the main thing they will do is enable people to do bad things in the world and create harm to our society, then you will think a liability regime where they can get dragged into court fairly easily, with steep legal costs to deploying this technology in the fullest way, is a good thing. My view is that we couldn’t have anticipated 20 years ago the full range of tools the internet would enable. And I understand there are significant harms from many of those tools. But there are enormous, enormous benefits, and taking those benefits off the table and eliminating future innovation is likely to cause a wide array of harms: barriers to entry and to accessing information, and limits on people’s ability to use information in ways that create value for them and for society. The current liability regime makes it less likely that we will ever see those things.

McCarty Carino: If generative AI tools can’t be protected under Section 230, how would you envision a regulatory framework around this technology moving forward?

Perault: One option is a policy experiment to better understand how generative AI works in practice. You could set up a test for a limited period of time, during which there would be expanded liability protection, and have it sunset after a set period, say 18 months or two years. During that time, you could require platforms to provide significant transparency, giving researchers access to data so they can study the harms and the benefits of the technology. You could also require regular reporting and monitoring, so that we’re looking closely at what these tools do in practice and learning from it.

Matt Perault’s full essay for Lawfare goes much deeper into the legal arguments and innovation implications.

And you don’t have to take his word for whether Section 230 will apply to generative AI. The Washington Post reports that Supreme Court Justice Neil Gorsuch has already waded into the issue.

During oral arguments last week, Gorsuch mused, “Artificial intelligence generates poetry. It generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected. Let’s assume that’s right.”

The case at hand last week was Gonzalez v. Google. It concerns whether recommendation algorithms, like the ones that populate your next video on YouTube, are protected by Section 230.

Last October on the show, we spoke to Santa Clara University law professor Eric Goldman about the implications of that case when the Supreme Court announced it would take it up.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer