Not all AI is, well, AI
Jan 2, 2025

Arvind Narayanan, author of the book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference," explores how not all forms of this technology are, in fact, AI, plus how lawmakers should approach regulating it.

Artificial intelligence and promises about the tech are everywhere these days. But excitement about genuine advances can easily veer into hype, according to Arvind Narayanan, a computer science professor at Princeton who, along with PhD candidate Sayash Kapoor, wrote the book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”

He says even the term AI doesn’t always mean what you think. The following is an edited transcript of his conversation with Marketplace’s Meghan McCarty Carino:

Arvind Narayanan: AI is an umbrella term. It refers to a collection of loosely related technologies, and many different products are called AI. In some areas, AI has certainly made remarkable progress. But in other cases, what is being sold as AI is, first of all, 100-year-old statistics, simple formulas being rebranded as AI. More importantly, it’s being used in situations where we should not expect AI or any other technology to work, like trying to predict a person’s future. So in the criminal justice system, AI is used to predict who might commit another crime if they are released before their trial, and if they’re deemed too risky because the algorithm predicts they might commit another crime, they are detained. And that could be months or years until their trial. And that’s just one example. It’s used in the health system. It’s used in automated hiring. In so many cases, these kinds of predictive technologies can pick up some basic statistical patterns in the data, that’s true, but they are not at a level of predictive accuracy where we think it’s morally justified to be making these incredibly consequential decisions about people.
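To make the “simple formulas rebranded as AI” point concrete, here is a minimal illustrative sketch of what a pretrial risk score of the kind Narayanan describes can amount to: a logistic regression over a handful of features. Every feature name, weight, and threshold below is hypothetical, invented for illustration rather than taken from any real risk-assessment tool. Nothing in it is modern “AI”; it is a weighted sum passed through a century-old statistical formula.

```python
import math

# Hypothetical weights of the kind a tool might fit to historical arrest data.
# These names and values are invented for illustration only.
WEIGHTS = {
    "prior_arrests": 0.40,
    "age_at_first_arrest": -0.05,
    "failed_to_appear_before": 0.80,
}
BIAS = -1.5

def risk_score(defendant: dict) -> float:
    """Return a 0-1 'rearrest probability' from a logistic formula."""
    z = BIAS + sum(w * defendant[name] for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))  # the logistic function: century-old statistics

# A consequential detain-or-release decision reduced to a threshold check.
defendant = {"prior_arrests": 3, "age_at_first_arrest": 19, "failed_to_appear_before": 1}
score = risk_score(defendant)
print(f"risk score: {score:.2f} -> {'detain' if score > 0.5 else 'release'}")
```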

Meghan McCarty Carino: Let’s talk about predictive AI, or automated decision making, where you kind of have your most potent criticisms. These are systems that use machine learning to try to predict, say, who is most worthy of a bank loan or who would be the best fit for a job or who is likely to commit crimes. What makes these applications of AI so problematic?

Narayanan: So let me make one small distinction here. Yes, these are all areas where predictive AI is being used. But, look, banks have to determine who is more risky and who is less risky, and if they make no distinctions between applicants, they will probably go out of business. So in some cases, yes, we have to apply some sort of prediction. But in other cases, it’s really not clear to us why we’re treating this as a prediction problem. In hiring, the way we used to do it is we insist on certain minimum qualifications, then we interview the person and come to a nuanced human understanding of how well the things they bring to this job will contribute to what we want them to do. That’s really hard to turn into a pure prediction problem of the kind machine learning can handle, because how well the person will perform is not merely a function of whatever they’ve said in their resume, but of a bunch of factors that relate to both the candidate and the environment into which they’re going to be placed. When we look at the research, a big part of the reason candidates often underperform might have to do with their manager.

And so when we ignore all that and try to turn this into a pure prediction problem, first of all, there doesn’t seem to be that much predictability in the data. And secondly, the experience we put these job seekers through — being interviewed by a robot, essentially — is, I think, a kind of violation of basic dignity: it takes away that job seeker’s ability to explain to a person why they bring something valuable to this job, why they’re deserving of it. When we forget that, we lose something really essential. And this is, I think, a kind of algorithmic injustice that goes beyond bias and discrimination. Even if a system has the same error rate for all demographic groups, our point in the book is that it’s not fair to anybody.

McCarty Carino: Are there certain use cases or conditions where you think skepticism is especially warranted when we talk about generative AI?

Narayanan: Definitely. So while this is a powerful technology, we should always be asking: Are the things we’re applying it to even technology problems, problems where technology can possibly be the solution? One good example of this was a story out of Cheyenne, the capital of Wyoming, where someone was campaigning for a bot to be the mayor. The bot just seems to be ChatGPT behind the scenes, but he calls it VIC (virtual integrated citizen), which sounds more sophisticated. He says he wants the bot to be making all of the decisions a mayor would make, not just mundane stuff like how much to spend on infrastructure, but also much more controversial things, like decisions about book bans. And he talks about how the bot can be more accurate than a person, and its IQ is 155, and so on. I don’t think IQ is a valid measure of how well a bot works, but let’s set that aside.

The bigger point is that accuracy is not even the relevant question here. I mean, what does it mean to have a bot for a mayor? The reason we have political processes is that politics is the venue we’ve chosen for resolving our deepest societal differences. These disagreements might get heated sometimes, but being able to work them out is the whole point of politics, and to try to automate that is to miss the point. To make it very concrete for our listeners: whatever your views are on book bans, imagine that this ChatGPT mayor, through whatever “objective method,” spits out a decision that’s not the one you agree with. Will you accept that decision simply because it came from a supposedly unbiased bot? Presumably not. And I think that example shows why this is not the kind of thing we should even be trying to automate.

McCarty Carino: Something we have heard a lot about over the last couple of years, particularly from experts within the AI community, is the potential for an AI apocalypse: the idea that artificial general intelligence is just around the corner. This is AI that’s as capable as a human in pretty much every way. Is this AI snake oil, in your view?

Narayanan: Yeah, so thinking about large-scale risks from AI is definitely important. There’s an AI safety research community, and we ourselves do some AI safety research. I’m glad that’s happening, but that’s different from the view that this threat is imminent and urgent and that we need to take extreme action to ward it off. For instance, that policymakers should get together and, through presumably unprecedented international cooperation, put the brakes on AI. Those are the kinds of policy proposals that result if we see this as an imminent and urgent threat. And for that, we’re asking: Where’s the evidence? This is not how we usually do policy.

McCarty Carino: Yeah, and when it comes to approaches to regulation, you kind of talk in the book about the need to be specific about harms. So many of these technologies that are around now are very general-purpose. They can be used for many benign purposes, and they could potentially be used for malicious purposes. How do you regulate a technology like that?

Narayanan: So there is some regulation that’s definitely needed at the level of the technology, but I think a lot of it has to be at the level of the use of the technology. I mean, generative AI is as general-purpose as computers are. We can’t make computers safe in the sense that they can’t be used by bad actors, unless we sell devices that are so locked down that only government-approved apps can be installed on them. So that’s the trade-off: if we want to solve the entire problem at the level of the technology, that’s the kind of almost authoritarian regulation you need. And if we’re not going to do that, then, unfortunately, it’s going to be harder, but a lot of it has to happen at the level of the use of the technology. We also shouldn’t forget that while it’s true that, for instance, bioterrorists might take advantage of AI, it’s not that AI is enabling them to do something they couldn’t otherwise have done. Again, AI is useful to every knowledge worker. It helps all of us do certain things, I don’t know, 10% faster.

And in that same way, it’s going to help bad actors as well. But that’s not an AI problem. The fact that pathogens can be created in the lab, and could potentially even cause pandemics, is a threat we have been living with for a very long time, and we have not adequately acted against it. So maybe this moment of worry about AI will give added urgency to policymakers to put into place policies for better pandemic prevention and response. If that happens, that’s a good thing. But if we treat it as an AI problem and try to put AI back in the box, first of all, it’s not going to work. And secondly, we will have done nothing about the fact that we are living with threats that exist even without the help of AI.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer