Regulating generative AI will be challenging
Jun 6, 2023

Alex Engler of the Brookings Institution says the U.S. might consider regulating how artificial intelligence is used rather than the technology as a whole.

The European Union is getting closer to approving the world’s most comprehensive artificial intelligence regulations. Here in the U.S. — well, at least we’re not defaulting on our debt, right?

Fast-moving developments in generative AI tools like ChatGPT and Stable Diffusion have raised a slew of concerns over misinformation, copyright violation and job losses. But even the EU’s AI Act — years in the making — wasn’t crafted with this kind of general-purpose AI in mind, these broadly accessible programs that have almost infinite applications.

Marketplace’s Meghan McCarty Carino spoke with Alex Engler, a fellow at the Brookings Institution who studies AI governance. The following is an edited transcript of their conversation.

Alex Engler: There’s a bunch of good ideas out there. Things like making sure you publicly document the underlying data sources of a large language model or large video model, which also might help people know if it uses copyrighted data or data that wasn’t obtained legally. Requirements around data governance, making sure that people who build these models actually spend time ensuring that there’s not really toxic or really discriminatory data used in the training, which can otherwise carry through into whatever downstream uses the model has.

Meghan McCarty Carino: Most of the popular tools we’re talking about are not open source. How would greater transparency around these models be helpful?

Engler: So one thing transparency does is it grounds the public conversation in reality, makes sure that we’re talking about the fact that these are better text prediction machines and not magic. And that actually improves the function of markets, if you really know what you’re buying; that’s also something the Federal Trade Commission has made a lot of noise, and some progress, on. Another is public understanding, which right now is receiving a lot of mixed signals. You’re certainly seeing a lot of public debate about broad existential risks of AI, which really overshoots what a lot of these models can actually do. So there’s value in the public really understanding what’s honestly at stake, what the advantages and limitations of these models are. And then there’s a third: with open-source models, researchers can take a crack at them. They can say, “Hey, this is what they can really do, and what they can’t do. And here are some alternative futures. Maybe if you took this model and you changed this set of data or this training method or this fine-tuning approach, that would make it more fair or more effective or more safe or harder to misuse.” That’s the real advantage of the best-case scenario for transparency, which is fully open-source models that can get this public scrutiny and feedback.

McCarty Carino: A lot of these very popular tools have not been publicly available for very long, but they have become very, very widely adopted, creating a lot of facts on the ground. How hard is it to regulate technology that is already so widely in use?

Engler: I think the uncertainty about exactly what to do and what problems we’re trying to solve is, in some sense, the biggest challenge. Lots of people are like, “Oh, we’re going to see floods of disinformation.” Other people are saying, “We’re going to see tons of job market disruption and job replacement using these technologies.” And most of the things that we talked about unfortunately call for different policy interventions in different spaces. So there is a risk of exhaustion, of overstepping what is most effective, or of not using the right tools in the right spaces.

McCarty Carino: What are some of the sort of specific areas that you think we should be watching when it comes to potential harms?

Engler: You know, I think the scams are unfortunately quite real, especially for any sort of security that’s based on people’s voices, or even at some points, unfortunately, people’s faces, which we have increasingly been building into security steps. I am worried about that. I also think it’s fair to be concerned about an increase in automated disinformation. All of that falls into the category of malicious use of AI, and that’s definitely the harder part of this to prevent once these open-source models are out there. Open sourcing has very real benefits, again, for public understanding and for shaping the corporate narratives around these tools; it’s really beneficial in a lot of ways that there are open-source models. It also makes it totally impossible to prevent their malicious use.

McCarty Carino: What would the risks be of regulating an emerging technology like this too soon or too much?

Engler: I think the worst-case scenario is that regulation manages to be anti-competitive, in that if you create lots of process requirements, those could advantage the larger players. I think there are pretty clear ways to avoid that, which is, again, tailoring requirements to the scale and scope of use. That matters because the more people who are using this, the more impactful it is, and that’s an OK standard by which to impose higher compliance and regulatory requirements. The European Union’s Digital Services Act offers a model for this: once a certain number of users, 45 million, use a specific platform, it comes under significantly higher scrutiny than much smaller services. And there’s no reason we couldn’t replicate something like that for some of these particularly impactful generative AI systems, if we decide that regulating at the model level, that is, regulating these AI systems themselves, is really the right place to do it.

McCarty Carino: Tell me more about the European Union’s AI Act. How long has that been in the works? And to what degree does it capture some of the concerns that we’re now having about these new generative tools?

Engler: So the European Union has been working on its AI Act for several years now, releasing the first draft in April 2021. And it certainly wasn’t originally aimed at generative AI. It was aimed at a broader set of problems: what happens when you use AI in high-risk circumstances like hiring or finance. More recently, the European Parliament, one of the three European bodies working on the bill, has been tackling the issue. It is scheduled to vote on its final bill on June 14, and then the bill goes to something called the trilogue, where these different European bodies come together and debate which version, and which aspects of each draft bill, they want to pass. Because the Parliament is the most recent to take this on, it has had to grapple with what to do about generative AI. And in just the generative sense, that is, what to do about these models that create really compelling language and aesthetically beautiful imagery, they’re not doing that much. It’s really mostly a disclosure requirement. Then there’s the set of requirements I mentioned before, around data sourcing, data governance, testing and risk mitigation, and independent evaluation. They’re putting those requirements on a broader category of large, important models, what they call foundation models, a term Stanford coined. So the generative side is only getting one part of this, and it’s a pretty limited set of requirements focused narrowly on the generative aspect.

McCarty Carino: Watching what has happened in the EU, what does it tell you about the challenges and the potential for regulating this technology?

Engler: One is that trying to tackle all of AI all at once is really hard. We should be thinking about incrementally adapting governance over years and decades in a kind of whole-of-government approach to handle the many aspects of AI. That lets you do more specific work within sectors, and probably produce more specific guidance that isn’t so driven by the type of technology itself and is concerned more with its use in a particular field. So that’s one of the really important lessons from the EU AI Act. Now, of course, the criticism in the U.S. that’s also very valid is that we’re not moving fast enough, and that by working piecemeal, agency by agency, we’re leaving big gaps. That’s especially true for online platforms, for the generative AI that’s going to be built into search, for which we have functionally no regulatory regime, and even for AI systems built into other types of websites, maybe around legal information or medical information.

McCarty Carino: How do you see the U.S. tackling potential AI regulation?

Engler: In these early steps, I am encouraged by the interest, by the fact that policymakers want to get involved and find new approaches and steps to take to address these challenges. But also, I think, we don’t want to be naive and suggest that this is going to be easy. This could be a 15-year fight by civil rights organizations to find a way to expand consumer and civil society protections from AI.

More on this

This is not the first, and certainly not the last, conversation we’ll have about regulating generative AI. In April, we spoke to Elizabeth Renieris, a technology attorney at Oxford, about how existing legal frameworks, things like copyright or antitrust law, can be applied to these new tools. She warned that when we think about regulating new technologies, we too often focus on all the novel things these tools do instead of the longstanding legal frameworks we rely on to protect people and businesses from harm.

