Marketplace Tech Blogs

An argument for algorithms that reflect our highest ideals

Molly Wood Oct 31, 2018
An AI cancer detection microscope by Google is seen during the World Artificial Intelligence Conference 2018 (WAIC 2018) in Shanghai on September 18, 2018. STR/AFP/Getty Images

Earlier this week, Google announced $25 million in grants for organizations working on “artificial intelligence for social good”: projects like wildlife conservation, stopping sex trafficking and eliminating the biases in algorithms that perpetuate racism and gender discrimination. It’s an admission that algorithms and AI are not neutral and that more care must be taken with their design. Jamie Susskind is a lawyer and the author of the new book “Future Politics: Living Together in a World Transformed by Tech.” He told Molly Wood that we can design algorithms that reflect our highest ideals. The following is an edited transcript of their conversation.

Jamie Susskind: We are building these systems. They are controlled by humans, they are owned by humans, they are designed by humans. It is not beyond the ken of Google to make a system that doesn’t come up with criminal background checks every time you search for the name of someone who sounds African-American. Because it may well be that plenty of people have clicked on that in the past. But plenty of people can be kind of racist. So why would you build a system that replicates that rather than one which actively counters it?

Molly Wood: It gets back to the “who decides” question though, right? You are asking humans to make value judgments in order to create better algorithms. And some of those value judgments might be obvious. But, you know, we don’t agree on much, apparently, anymore, ever.

Susskind: I completely agree with you. But let’s break this down. Looking back across the grand sweep of human history, there have always been contested values. But what humans have struggled for, for thousands of years, is some degree of oversight, or at the very least some degree of transparency, over the decision-making bodies that affect our lives. In the past those may have been kings or conquerors; today they’re parliaments. And in the future they’ll increasingly be those who write the code that shapes our day-to-day lives. I look forward to a time when we can debate in good faith the political implications of Google’s algorithm or Twitter’s algorithm. But the truth is, most of it is kept secret right now. And if we can get to the latter world, the one I just described, I think it will be an improvement on today’s. Because at the very least, then we can try to agree that the purpose shouldn’t just be the enrichment of the tech firm, but rather the common good as a whole.

Wood: It feels like when you look at all of this, the distorting force here is going to be capitalism. Right? The reason not to make the algorithms transparent is going to be competition. And that seems to be a hard force to overcome.

Susskind: And that’s a key argument in my book. What we have are systems being generated, developed and deployed according to the logic of capitalism, which is fine. But their consequences are actually political. And the logic of politics says they should be used more for the common good, that they should be regulated and overseen. Whereas the logic of capitalism says they should be used to compete and to enrich.


And now for Related Links, where Molly shares her thoughts on the other tech news she’s thinking about:

If we are going to have a conversation about ethics in technology, we really need to talk about apps for kids. Because apparently the development of apps for kids exists in an ethical gray zone somewhere between clubbing baby seals and handing out raisins on Halloween. There’s the part where they’re designed to be addictive. There’s the part where they’re jammed with in-app payments, so kids, and even adults, accidentally end up spending a car payment on extra coins or doohickeys to keep playing. And there’s the part where thousands of Android apps might be illegally collecting data on little kids, despite the Children’s Online Privacy Protection Act, the federal law that bars collecting personal data from children younger than 13 without parental consent.

And then, according to a great piece by Nellie Bowles in the New York Times on Tuesday, there’s the part where they are jammed full of ads, ads where the host of the program pressures kids to buy stuff and then cries when they don’t. We’re talking about apps designed for kids age 5 and under. And what’s extra disturbing is that the wealthy techies who contributed so much to this ecosystem now realize how damaging it can be and are restricting its use in their homes and their schools, leaving everyone else to let tablets crammed with ads babysit their kids. The application of ethics to technology is going to be a very long historical arc.
