How artificial intelligence could influence hospital triage
Jan 12, 2022

As the latest virus surge swamps hospitals, researchers consider the role machine learning could play in making decisions about care.

The latest surge of COVID infections has left hospitals crowded, short-staffed and, in some cases, rationing care. That means hospital clinicians sometimes have to go through a triage process to prioritize who gets care first, or at all.

For example, a doctor may decide that a patient suffering respiratory failure should be admitted to the intensive-care unit over someone who seems to have minor injuries from a car accident. But that distinction, especially in a crisis, might not be so clear-cut.

So medical research centers like Johns Hopkins and Stanford are studying how machine learning might help.

Dr. Ron Li is a clinical assistant professor at Stanford Medicine, where he’s medical informatics director for digital health and artificial intelligence clinical integration. The following is an edited transcript of our conversation.

Ron Li: I think, more and more, what we’re starting to see people explore is the use of technology to make, or at least help make, decisions with the data that has already been gathered. And this is where we’re starting to look into the realm of artificial intelligence and machine learning models. Now, do we rely only on algorithms for decision-making such as triage? Usually not. I mean, that’s why we have clinicians, who will follow algorithms to an extent. But when the situation deviates from what the algorithm is designed for, and usually it does, that’s really where the human expert has to come in and make a decision.

Dr. Ron Li (courtesy Stanford Medicine)

Kimberly Adams: With so many health care workers out sick or quarantining, has that changed how the medical profession is considering deploying this sort of computer-generated or computer-assisted triage strategy?

Li: So I don’t know if health systems are necessarily thinking, you know, that’s going to be the solution. However, I think it’s a good reason to start exploring how these algorithms can fit into existing workflows and support clinicians, physicians and nurses. So think of the algorithm as more of a teammate, if you will, rather than a replacement of some of this work.

Adams: Throughout the pandemic, there have been concerns and legitimate fears about some of the institutional biases against, say, people with disabilities or people of color, when it comes to triage. How do you fix that problem, if maybe even the human-centered standard of care doesn’t adequately address it?

Li: So if, in the past, there’s already bias, if certain groups of people were excluded from care, then the algorithm is going to learn from that prior experience and most likely will amplify those prior biased experiences. But I think the other thing that we should really think about is, let’s say we have algorithms that are incorporated into workflows, and you have clinicians and physicians who see not just lab results and vital signs and not just talk to patients; they’re also gonna see risk scores coming from AI systems. What is the interplay between digesting an AI-generated risk score and then putting that together with what they heard from the patient? How does that all work out? I don’t think we quite understand how that works yet, because there are so few situations where we have AI models being used in the real world.
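
To make the feedback loop Li describes concrete, here is a minimal sketch, using entirely synthetic data and not any hospital’s actual model, of how a classifier trained on historically biased triage decisions can absorb and reproduce that bias:

```python
# Minimal sketch (synthetic data, not any hospital's actual model) of how a
# classifier trained on historically biased triage decisions reproduces bias.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical patients: a clinical severity signal plus a group flag (0 or 1).
severity = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels: suppose past triage under-prioritized group-1 patients
# at the same severity, so the label encodes that biased human decision.
admitted = (severity - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on the biased history, with group visible to the model.
X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, admitted)

# At identical severity, the learned model now scores group-1 patients lower,
# meaning it has absorbed the historical bias and would perpetuate it.
same_severity = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_severity)[:, 1])  # group 1 gets a lower score
```

Because the biased historical decisions are the training labels, the model scores otherwise identical patients differently by group, which is exactly the amplification risk Li points to.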

Related links: More insight from Kimberly Adams

You can read more about Li and Stanford’s pilot program for using machine learning as part of triage.

And at Johns Hopkins University, they’ve launched a system called TriageGO in three of their emergency departments. The system uses electronic health records, patient vital signs and machine learning to group patients by risk. Again, it is a limited program meant to assist clinicians, not replace them.
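
For readers curious what “grouping patients by risk” might look like in code, here is a toy sketch of ML-assisted risk stratification. It is not TriageGO’s actual model; the features, outcome and thresholds below are hypothetical stand-ins.

```python
# Toy sketch of ML-assisted risk grouping from vital signs. NOT TriageGO's
# actual model or feature set; the features, outcome and thresholds below are
# hypothetical. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000

# Hypothetical vital-sign features: heart rate, systolic blood pressure, SpO2.
heart_rate = rng.normal(90, 20, n)
systolic_bp = rng.normal(120, 25, n)
spo2 = rng.normal(95, 4, n)
X = np.column_stack([heart_rate, systolic_bp, spo2])

# Synthetic outcome standing in for "later needed critical care."
critical = (heart_rate > 115) | (systolic_bp < 90) | (spo2 < 90)

model = GradientBoostingClassifier().fit(X, critical)

def risk_group(vitals):
    """Map a model risk score to a coarse tier (1 = highest risk)."""
    score = model.predict_proba(np.atleast_2d(vitals))[0, 1]
    # Cutoffs are arbitrary here; a clinician reviews the tier, not the model.
    return 1 if score > 0.7 else 2 if score > 0.3 else 3

print(risk_group([130, 85, 88]))   # likely tier 1: abnormal vitals
print(risk_group([80, 125, 98]))   # likely tier 3: normal vitals
```

The design point mirrors what Li and the Hopkins team describe: the model only proposes a coarse risk tier from the data on hand, and the clinician still makes the triage decision.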

And, as I discussed with Li, there’s a risk that algorithms can amplify preexisting medical biases.

Sasha Costanza-Chock, director of research & design at the Algorithmic Justice League, has a lengthy Twitter thread from April 2020 about how non-computer-generated algorithms for managing emergency triage can reinforce ableism and racism in health care.

It’s a particular concern when, as reported by The New York Times and others, the current COVID wave is forcing health officials and doctors with limited resources to sometimes decide who will get lifesaving treatments, and who won’t.


The team

Daniel Shin, Producer
Jesús Alvarado, Associate Producer