The AI safety bill dividing Silicon Valley
Sep 4, 2024

Tech companies asked for regulation of artificial intelligence, but there's little agreement on the California Legislature's approach. Chase DiFeliciantonio of The San Francisco Chronicle says Gov. Newsom faces a difficult decision and isn't showing his hand.

Depending on whom you ask, a bill passed by California lawmakers last week could either save us from imminent AI doom or strangle innovation in Silicon Valley.

The bill, SB 1047, is one of the first significant attempts to regulate artificial intelligence in the U.S. It’s supported by some high-profile voices in tech, like Elon Musk. The bill would require developers of the most powerful AI models to build in a “kill switch” and would hold those companies liable for catastrophic harms their models cause.

But critics of the bill — including former Speaker of the House Nancy Pelosi and tech companies OpenAI and Meta — say the regulation could stifle growth in Silicon Valley. 

Marketplace’s Meghan McCarty Carino is taking a closer look at the arguments for and against SB 1047 with Chase DiFeliciantonio, a reporter at The San Francisco Chronicle who has been following the bill’s journey through the California Legislature.

The following is an edited transcript of their conversation.

Chase DiFeliciantonio: The folks who are supporting this are essentially saying, look, the big AI companies have been asking to be regulated for about a year now, in some cases longer. They’re saying that they are building really powerful AI programs and really powerful AI models, and that they want the government involved. They don’t want this to go off the rails and have their programs be used for something really bad. That’s the fundamental argument. There have been a lot of voluntary agreements with the White House and with federal agencies, but at the same time, those aren’t really terribly enforceable. And so there’s the sense that everyone has to be on the same page when it comes to testing requirements and what those will look like.

Basically, the argument is that there’s really significant harm and damage that could be caused by increasingly powerful AI models. It seems like every few months Meta or OpenAI or Anthropic, you name it, releases some update with a really surprising or powerful or fantastical new ability. And so the argument is that that needs to be regulated and subject to some kind of safety testing. I think the hitch there is that no one can really agree on what exactly that safety testing should look like.

Meghan McCarty Carino: Right, and so this bill has stirred up quite a lot of opposition from many different directions. What are some of the arguments that critics are making?

DiFeliciantonio: They are legion. The first one — going off what I just said — is that folks have said the testing requirements here are too vague, that in this industry there aren’t broadly accepted safety-testing benchmarks for really big programs and models. The other thing that comes up a lot involves companies like Meta, Facebook’s parent company, which makes what are called open source models that are free to use: any developer can get in there and basically repurpose them to build an app to do whatever they want. This is seen as one of the ways that startups and small companies are really going to use AI to innovate, to create things that are going to make our lives better or not, and to really change the technology. And the idea here is that this bill, SB 1047, is potentially going to create a lot of liability for companies like Meta and for smaller companies if something goes wrong, intentionally or not, and it’s going to chill the whole industry.

So, Meta has really come out strongly against this bill. They’ve written letters, they’ve lobbied pretty hard against it, and they’re saying it’s not great for their business and it’s really not great for the smaller developers who are using open source. It’s going to make people pull away from using the technology, because even if they don’t mean to, something could go terribly awry, and they would be on the hook for it and would have the attorney general knocking on their door.

McCarty Carino: We’ve heard from researchers like Stanford’s Fei-Fei Li, widely considered a godmother of AI, who has raised similar objections, saying this could really affect the research and development environment for the whole AI ecosystem, right?

DiFeliciantonio: Absolutely. Fei-Fei Li wrote an opinion piece outlining a lot of these arguments pretty straightforwardly. For a lot of the public, they might be a little in the weeds, kind of inside baseball, but her argument is that the bill won’t just have this potentially chilling effect on the industry; it could also hurt academic research. A lot of that research relies on open source AI models, and folks trying to find the next cancer cure, or whatever it may be, might be a little skittish about using one of these programs because they’re facing state liability. So this is the balancing act that state Sen. Scott Wiener, the bill’s author, has been trying to deal with since he announced this bill earlier this year.

McCarty Carino: You mentioned Meta, but other big tech companies in this space have opposed the bill too, including OpenAI, the maker of ChatGPT, which sent a letter to Sen. Wiener. What do these AI giants say would be lost if this bill passes?

DiFeliciantonio: I think they paint it as essentially this big unknown. They don’t know if this will necessarily work; it’s kind of a new way of doing this, and it could have all these unintended knock-on effects. Some companies said, “Oh, we’ll have to leave California to avoid the provisions of the bill,” but I’ve asked Scott Wiener about that, and he doesn’t really buy that argument. He mentioned that’s been their argument for every piece of tech legislation he’s brought to the fore. The bill is also written to apply to companies that have operations in California, so simply leaving wouldn’t necessarily put them beyond its reach.

But the big tech companies are very much saying, “Look, this is a big industry and there’s a lot of money being poured into it.” It’s been reported that OpenAI is raising more money at a $100 billion valuation. I mean, these are mind-boggling numbers. And going forward, this is an industry that presumably we want to remain in California, to continue to be this new economic driver in tech, and the argument is that this bill could have a chilling or negative effect on that. Again, it’s not entirely clear that will be the case, but those are the arguments that they’re making.

McCarty Carino: As you noted, tech leaders have been saying in remarks before lawmakers that they support regulation of the AI industry, but now that we have the most fully realized version of that in California, it seems like they don’t support it. What’s your read on that?

DiFeliciantonio: This is one of the most interesting parts of this. There’s this question of, what do you want to see in terms of regulation in your industry? And there’s not a clear answer to that; I think every company would have a different answer. One thing that comes up a lot is that they don’t want the state to put the onus on the big companies and the big model makers to guarantee a level of safety that they’re basically saying they cannot guarantee. They want to put that onus on whoever is using the model for ill, who’s using it to attack the power grid or cause some sort of horrible, catastrophic damage.

One company I mentioned earlier that’s actually been really engaged throughout this process, with amendments on the bill, is Anthropic, which is also based in San Francisco, was founded by a number of OpenAI alums and makes the Claude chatbot. They talk about being focused on “constitutional AI,” and they talk a lot about safety and building safe programs. Even so, they haven’t fully come out in support of the bill. They’ve said it’s probably OK, and that’s after they suggested a bunch of amendments that Sen. Scott Wiener pretty much took and included in the bill. He’s really tried to make a point of saying that he wants to work with these AI companies to make this the best it can be. So some companies have said, “This bill is dead on arrival, and it’s not something that we can accept in any form,” and others have said, “We want to try to work with you and at least build on what’s already been written.”

McCarty Carino: The bill is now on California Gov. Gavin Newsom’s desk. He has until Sept. 30 to either sign or veto the bill. Any clues about which way he might be leaning?

DiFeliciantonio: The governor has been pretty silent on this one. He did not write a letter or try to involve himself in the legislative process, which was lengthy for this bill and saw a lot of amendments. For Newsom, signing this bill would anger a lot of people, both in so-called little tech and at the big companies up and down the ladder in the industry. If he vetoes it, he could open himself up to criticism that powerful tech companies threw a lot of money behind a lobbying effort and sank a bill that is supposed to keep the rest of us safe. The arguments here are myriad, but it is indeed a difficult decision that he’s going to have to make.

More on this

In October 2023, President Joe Biden issued an executive order to start developing standards for how the most powerful AI systems are developed and deployed. As an executive order rather than an act of Congress, it is somewhat more limited in its scope and enforceability.

Former President Donald Trump’s platform calls for reversing the order, and while Vice President Kamala Harris has taken a leading role in shaping the administration’s approach to AI, her campaign for president hasn’t fully clarified the policies she might pursue.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer