Would you trust a cancer screening by artificial intelligence?
As consumers, we’ve all been subjected to the “upsell,” or pressure to pay a little more for a product that’s slightly better.
It’s one thing if you’re buying, say, a car or a piece of clothing. The ethical questions get a lot more complicated in health care.
Some providers have started integrating artificial intelligence in diagnostic procedures, including screenings for breast cancer. The tools may be available for an additional cost, and questions about their accuracy have been raised.
Marketplace’s Lily Jamali spoke with Meredith Broussard, a journalism professor at New York University, about integrating AI into mammograms and her personal experience grappling with the tech.
The following is an edited transcript of their conversation.
Meredith Broussard: A couple of years ago, I started looking into AI mammography for a project I was doing for my last book, “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.” And what I discovered was that, even though we’re excited about new kinds of AI and mammograms, actually people have been trying to detect cancer using AI and computer-assisted techniques since the 1990s. And it hasn’t been successful in all that time.
Lily Jamali: And in your book, in describing the experience that you had, you seemed, for lack of a better term or phrase, “weirded out” by having there be a role for an AI in the diagnosis that you got. What was the experience like for you?
Broussard: Well, I think that weirded out is a good description of how I felt. I should start by saying that I had breast cancer, and I was treated for it. And I am so grateful to the doctors and the medical professionals who took care of me. After my recovery, I started looking into the state of the art in AI-based cancer detection because I noticed something in my chart that said, “These films were read by an AI.” And because I’m an AI researcher, I thought, “OK, whose AI was it? How effective was this AI? What did the AI find?” And I couldn’t get answers to any of these things. So I decided to do what’s called a replication experiment: I was going to take my own scans and run them through an open-source AI in order to write about the state of the art in AI-based cancer detection. The whole situation was just very, very strange to begin with. I felt uncomfortable about AI reading my scans because I didn’t have enough information.
Jamali: And because you hadn’t consented to that, right?
Broussard: Exactly. That is something that people are really trying to wrestle with because the AI technology is developed using images that people have consented to, right? So the particular hospital that I went to hadn’t told me before I got the mammogram that there would be AI involved in reading my scans. And I should also say that the AI was not alone in reading my scans. So the way that it works almost all the time is that a doctor reads the scans first. And then after the doctor submits their diagnosis, then they get access to the AI’s results.
Jamali: Based on all the research that you did, do you now have a pretty good sense, do you think, of how common or uncommon AI integration is when it comes to these very important, potentially life-changing diagnoses?
Broussard: So we tend to think about technology as being this landscape that has lots and lots of players. But actually, the opposite is true. There are a few major players. So there are a few big tech companies, and in the world of medical imaging, there are a few big companies. And yes, all of those big companies are trying to do something with AI. But whether they’re being successful or not, well, the jury’s still out. When I really started reading the studies and looking at the accuracy of these systems, I was not personally confident enough to want to put my fate in the hands of these algorithms.
Jamali: And you spoke to a number of doctors about this as well. What do they have to say about the way that AI is being used in this context as a health care service for women, whether it’s augmenting the work that they’re doing or in some cases, maybe even replacing it, maybe down the road?
Broussard: Well, I think there are two things to think about. One is the money and one is the workflow. So in terms of workflow, the doctors get the AI evaluation after they have entered their own evaluation into the computer system. So the AI currently is an extra step for the doctors to go through. And doctors have really different opinions about electronic medical records and about digital steps in their workflows. The most interesting study I read looked at doctors in different disciplines and how they felt about the AI in their discipline. The breast cancer doctors thought that the AI was useless. They were not impressed with looking at the AI’s results. They did not feel like it helped them in their diagnosis. And the lung cancer doctors really liked it. They were like, “Oh yeah, the AI validates what I already thought was true.” So that’s really interesting, right? We tend to think about AI as a monolith, but really it depends on the kind of AI. It depends on how good the AI is. It depends on the discipline. And it depends on the individual doctor.
Jamali: Any sense of why a certain specialization, like the lung cancer doctors, might be more comfortable or put more faith in the technology, the way that it’s being used in their specialty, compared to the others?
Broussard: There’s another really interesting study where they took AI and ran it on scans at three different hospitals. And the AI was extremely accurate at each hospital. But then when they took the model that had been trained at hospital A and ran it on the scans at hospital B, the results fell apart. The AI was no longer accurate. Training an AI on images from one site and then running it on scans from a different site gives you different levels of accuracy. It could be something as small as where the label for the scan is on the medical image, right? The AI could be picking up on something that the human eye just does not notice. So it really shouldn’t surprise us that image-recognition systems in cancer diagnosis, or in medical diagnosis generally, are just as fragile as facial-recognition systems.