It’s not too late to change the future of AI
Nov 13, 2024

Gary Marcus, author of “Taming Silicon Valley,” says we still have a choice in determining whether artificial intelligence will help or harm humanity. But the future shouldn’t be left in the hands of tech companies or the government alone.

Gary Marcus is worried about AI. The professor emeritus at NYU doesn’t count himself a Luddite or techno-pessimist. In fact, he co-founded a machine learning startup that was acquired by Uber in 2016.

But Marcus has become one of the loudest voices of caution when it comes to AI. He has chronicled some of the funniest and most disturbing errors made by current tools like ChatGPT, calling out the many costs, both human and environmental, of an industry that continues to amass money and power.

In his new book, “Taming Silicon Valley: How We Can Ensure That AI Works for Us,” Marcus lays out his vision for a responsible path forward.

Marketplace’s Meghan McCarty Carino spoke to Marcus about that path and how it may be further out of reach, though not impossible, given the results of this year’s presidential election. The following is an edited transcript of their conversation.

Gary Marcus: The point of my book was to say the tech companies aren’t really going to self-regulate, and we can’t really count on the government, because the tech companies have so much power over the government. And so, the only thing that we can do as citizens is to stand up and say, this is not acceptable to us. So, for example, I talked about the notion of a boycott. You know, we could, as a group of citizens, say, look, we like the idea of AI, we want to use AI, but the AI that we have right now is morally and technically inadequate. It’s damaging to the environment. It’s ripping off artists and writers. It’s discriminating against people. It’s also unreliable. Come back to us when you have an AI that we can trust, that doesn’t have all of these negative consequences for society, and then we’ll use it. In the meantime, we’ll wait.

And the parallel here is, you know, for years, all these industrial companies would pour all kinds of chemicals into the environment, and we as citizens all had to eat the consequences. There’s a famous quote that goes, “privatize the profits and socialize the costs.” Well, that’s exactly what’s happening right now with generative AI. The costs, whether you’re defamed or it’s the cost to the environment, are borne by the citizens. The companies aren’t really doing anything about it, and they’re making profits. They’re ripping off the artists they’re making profits from. So we could say, that’s just not cool. We’ve seen this movie before, and you should take responsibility for the consequences.

Meghan McCarty Carino: In your book, you write about the potential for something like a nonpartisan federal agency that oversees AI. What would that achieve?

Marcus: I think that’d be a good idea, whatever party is running the government, for two reasons. One is that the potential of AI is enormous. I’ve been critical of it lately because I don’t like, either morally or technically, what we’re working with right now, but we will have better AI in the future. There are already some positive use cases around medicine and so forth.

So, on the one hand, you want somebody in Washington whose full-time job is to run a team of people who look at AI and say, how can we use this to our maximal advantage? Can we reduce the number of employees who are doing meaningless things? Can we make health care better, etc.? And on the other hand, to look at all the downside risks. So, we need to be looking at climate change. We need to be looking at cybercrime and so forth. We need somebody who is kind of running point on that and has the staff to be able to deal with it.

McCarty Carino: Well, let’s dig into some of the concerns that you enumerate in your book. We’ve talked about a lot of them on the show, both with you and with others. There’s the unreliability issue, there are economic harms, environmental harms and malicious uses of the technology, like disinformation. From where you’re sitting today, what are you most worried about? What seem to be the most pressing issues?

Marcus: I mean, all of those are still pressing. And I guess if there’s one that we didn’t talk about before, or maybe a pair of them, it’s surveillance and kind of influencing people’s thoughts. One thing that has become clear to me is a company like OpenAI may be forced into a kind of surveillance that we haven’t really seen before. So, they’re not really making money. They’re losing money, and what we’re finding is that large language models are not particularly reliable, and as I’ve been warning since 2001, architectures of this sort are prone to hallucinations and so forth. And so, what’s happened is that OpenAI isn’t really making money. A lot of people tried it out, a lot of big companies piloted the stuff, and then they’re like, you know, it’s promising, but it doesn’t really do what we need it to do. So, some companies may be using it, but mostly when I talk to executives, they’re still kind of in a wait-and-see phase, and so I don’t think that that’s going to bring in the tremendous amount of money that OpenAI needs to pay its staff, who are extremely expensive, and to pay for the chips that they’re using, which are extremely expensive, and so forth. So, they’re actually poised to become a surveillance company. Why? Well, first of all, people spill their hearts out into ChatGPT, right? They often put their company’s private data into ChatGPT. So that’s part of it.

Also, OpenAI recently bought a stake in a webcam company, which I think is a kind of sign, and then they put Paul Nakasone, the former director of the NSA, on their board. And so, I see a lot of moves positioning OpenAI to become a surveillance company.

And then it’s coupled with a second concern, which is that you can shape people’s opinions with large language models. And so those who control LLMs, generative AI chatbots like ChatGPT and so forth, have an enormous power. We’ll see how they use that power to shape our thoughts. I mean, do you think Elon Musk, who owns xAI, would hold back from influencing other people’s thoughts, given how he’s been running X? Things could change pretty radically in American life pretty quickly using these technologies. And the meta worry here is that whatever hope we had of regulating these things in the United States probably just diminished greatly with the outcome of the election.

McCarty Carino: One policy you write a lot about in the book as being ripe for reform is Section 230. This is a section of the Communications Decency Act, passed by Congress in 1996, which shields internet companies from liability for things that users post online. Why is this important for AI?

Marcus: I think both parties have been upset about the consequences of Section 230, and so when I testified in the Senate, it was clear that people in the Senate did not want 230 to apply to AI. If a chatbot, for example, defamed somebody, should the company be responsible for that? If Section 230 applies to AI, then the AI company making the chatbot is automatically exempt. So, I understand the strong inclination of the American people around the First Amendment. I am an American citizen, I love the First Amendment, but it’s one thing to sort of lie retail and another to do it wholesale. Like, do we want Russian botnets to be able to put out a billion pieces of misinformation a day without any consequence? I don’t think we want that. I think that we want some restrictions on untruths and hate speech and so forth, especially in volume. Some people are going to react strongly to that, but that’s my view.

We’ll see how things play out in the next administration, but there are a lot of things that lead to inertia, and there’s a lot of money at stake for the tech companies. And you know, the tech companies are enormously wealthy. They have fantastic lobbyists. They have their pick of the best people in the world to do that. They have essentially endless budgets, and they have proximity to Washington, D.C. So even if everybody in the government says we should repeal 230 or restrict it in some way, that doesn’t mean it’s going to happen.

McCarty Carino: The last time we spoke on the show last year, you were very focused on the threat of AI in the 2024 election. I believe you said you thought it could be a “train wreck.” How did it measure up?

Marcus: I don’t think we actually know the answer to that yet, and I don’t know if we’ll ever know for sure. I think we do know there were a lot of deepfakes generated and a lot of propaganda spread. I probably underestimated the intensity with which Elon Musk personally would participate in spreading misinformation; maybe that outweighed the AI, and maybe not, and maybe AI played a role in generating and distributing that misinformation. In a lot of ways, the election was a kind of information war. I suspect that AI played a part in that, but it’s very hard to know.

More on this

As we mentioned, the last time we had Gary Marcus on “Marketplace Tech” back in 2023 was to discuss his support for the “AI Pause” letter. This was an open letter from the Future of Life Institute calling for a six-month pause in the development of advanced AI models.

It was signed by many big names in tech and AI research, like Steve Wozniak, Yoshua Bengio and Elon Musk, the man who now has the ear of the president-elect.

Of course, since that letter, Musk has gone on to push ahead with his own AI company, xAI. But he has maintained support for some AI regulation, including SB 1047, a sweeping California bill that was opposed by many AI companies and eventually vetoed by Governor Gavin Newsom.

