It’s imperative – and nearly impossible – to contain artificial intelligence, expert says
Sep 7, 2023

In his book "The Coming Wave," AI innovator Mustafa Suleyman describes the challenges and dangers he sees as the technology quickly advances and becomes widely accessible.

When Mustafa Suleyman co-founded the AI research company DeepMind more than a decade ago, his goal felt ambitious, even a bit far-fetched: to build a machine that could replicate human intelligence.

Now, he says, rapid progress in the development of artificial intelligence means the goal could be met within the next three years, and the implications of that milestone are huge.

Suleyman explores those implications in his new book, “The Coming Wave,” which came out this week. Marketplace’s Lily Jamali spoke to Suleyman, now CEO and co-founder of Inflection AI, about a core theme of his tome: the idea of containment.  

The following is an edited transcript of their conversation.

Mustafa Suleyman: The idea of containment is that we should always have the ability to slow down or potentially even completely stop any technology at any period in its development or its deployment. It seems like a kind of simple and reasonable idea. Who wouldn’t want our species to always have control and oversight over the things that we invent? But it is, I think, the big challenge of the next few decades, precisely because of the pace of change with AI and synthetic biology and how quickly things are improving.

Lily Jamali: In your book, it feels at times like you’re arguing for containment while also making the case that, to some extent, containment is impossible. Is that a fair assessment of the argument you’re laying out?

Suleyman: Yes, exactly. I think that when you look at the history of all technologies, things get cheaper and easier to use and they spread far and wide. Everything from the hand ax to the discovery of fire to the invention of steam and electricity has got cheaper and easier to use over time, and everybody has got access. If that is the nature of technology, some kind of law of technology, then that really raises some pretty complicated questions for where we end up over the next few decades.


Jamali: Talk to me about that timeline. In your view, what’s happening in the next 10 years or so?

Suleyman: Let’s try to be specific about the capabilities that would be concerning. So, if you explicitly design an AI to recursively self-improve, that is, it has the power to modify and optimize its own code, then you’re closing the loop on its own agency or behavior and taking the human out of the loop. As these models get more and more widely available in open source, more people will be able to train really powerful AI models. Today, only 20 organizations in the world can do that, but if in the next decade 200 million people can actually train these models, which is likely or even inevitable given the exponential reduction in the cost of compute, then somebody is going to take that risk of tinkering and experimenting in a way that is potentially dangerous, that might cause harmful effects as a result of a recursively self-improving AI. That’s the kind of thing that I think we’re all concerned about.

Jamali: You write that containment of new technologies has always failed eventually, but nuclear weapons and nuclear technology seem to be something of an exception to that rule. Can you explain that?

Suleyman: Nuclear is an exception in the sense that there really are only a few nuclear powers in the world today. In fact, the number of nuclear powers has gone down from 11 to seven. We’ve basically spent the last 70 years reducing nuclear stockpiles, monitoring all uranium enrichment facilities and very carefully licensing and restricting access to knowledge of those kinds of materials and so on. In some ways, it’s a great achievement, but unfortunately, it’s quite different from artificial intelligence and synthetic biology today. Nuclear weapons are extremely expensive to produce, they’re very complicated, and they involve getting access to and handling very dangerous radioactive materials. That’s quite unlike the nature of AI software, which is increasingly cheap, readily available and accessible to millions of people.

Jamali: Some people may be familiar with the Turing test. This is a test created by computer scientist Alan Turing in 1950, and it’s meant to evaluate the intelligence of a computer by testing its written conversation abilities. Basically, if a human can’t tell if they’re having a conversation with a computer or with another human, we’d say that the computer has passed the test. In 2023, is this test still meaningful?

Suleyman: That’s a question I explore in the book because now that we have AIs that are pretty much as good as a lot of humans at natural conversation, it’s not clear whether we’re any closer to knowing whether or not they’re intelligent. And so, the initial goal of the Turing test was to measure intelligence, but it turns out that what an AI can say isn’t necessarily correlated with whether it’s intelligent. So, another take on this, which I think is actually more revealing and more helpful, is to try to measure what an AI can do and to instead focus on capabilities.

Jamali: You propose a “modern Turing test” in your book. What do you mean by that phrase?

Suleyman: The modern Turing test that I’ve proposed is to give an AI a very general, high-level goal. For example, you would tell it, “With a $100,000 investment, go and make $1 million over the course of a few months.” The AI might interpret that goal by saying, I’m going to invent a new type of product, and I’m going to research online to see what people like, what they don’t like, what they might be interested in. Then I’m going to contact a manufacturer, perhaps over in China, for my new product, and I’m going to negotiate the price and the details, the blueprint of that product. Then I’m going to get it dropshipped and sell it on Amazon or online somewhere. Then I’m going to try and create marketing materials around that. All of that is clearly possible with digital tools today, but it would require a lot of human intervention. It is increasingly plausible, though, that the entire thing could be done autonomously end to end, albeit maybe with a little bit of human intervention where there are legal requirements. The goal here is not necessarily to make money; it’s just to take advantage of the dollar as a unit of measurement of progress over some time period. If a system were capable of doing this kind of task, then we could start to understand the implications for work in the future and for how power will proliferate. Because if you have access to one of these tools, then suddenly you’re capable of doing much, much more with less, and that changes the power landscape.

Jamali: I remember you calling yourself a default optimist at some point in the book, and towards the end you write that you were originally planning to write a more positive book about AI. But then your perspective changed. What caused that change?

Suleyman: I think the thing that has made me more concerned is, when I started doing the research for the book and I looked back at the history of containment, there really aren’t very many examples where we’ve said no to a technology. I spent most of the first third of the book incredibly optimistic and enamored of technology. I love technology. I’m a creator and a builder and a maker, and it inspires me every day to make things and do things. So, it’s a hard realization to also accept that technology is getting smaller and more powerful at the same time. When you roll that out for 10 or 20 years, it just opens this fundamental question of what these models will look like in the future. What does it mean that we will be able to engineer synthetic life? So introducing friction into that process and introducing human oversight and traditional governance is the way that we can make sure we have the best chance of making it accountable to democratic governments and to the public interest in general.

More on this

Something tends to happen as companies race to dominate an emerging technology. They compete, but once regulators get involved, they cooperate. AI is no different.

In July, seven leading U.S.-based AI companies met with President Joe Biden and agreed to safeguards to help manage the risks of new AI tools. Mustafa Suleyman was part of the meeting and explained to me that the companies involved agreed to audit their models, try to break them and then share with one another the best practices discovered in the process. But here’s the catch: That commitment is voluntary for now. Still, it’s a step Suleyman said was “appropriate for the moment.”

There are critics who say these voluntary commitments are really just a way for AI companies to write their own rules. But ultimately, that’s a job for Congress, whose members have entertained us with memorable displays of ignorance about tech over the years.

That could change, though. Senate Majority Leader Chuck Schumer recently floated a plan to convene a panel of experts to give lawmakers an AI crash course.

