When a senior is ill, can an algorithm decide length of care?
Apr 22, 2024

According to an investigation from Stat News, Medicare Advantage insurers are using algorithms to predict how long patients will need care, and millions have been affected by the projections. Casey Ross discusses the problem and reform efforts.

Artificial intelligence has become a big part of medicine — reading images, formulating treatment plans and developing drugs. But a recent investigation by Stat News found that some insurers over-rely on an algorithm to make coverage decisions for seniors on Medicare Advantage, a version of Medicare offered by private insurers.

Marketplace’s Meghan McCarty Carino spoke with Casey Ross, who co-reported the story. He said an algorithm predicted how long patients needed care, and coverage was curtailed to fit that calculation.

The following is an edited transcript of their conversation.

Casey Ross: Based on the length of stay projected by an algorithm, patients in these facilities were then getting coverage denied in concert with that projection, regardless of their actual condition as assessed by their in-person providers. And so this algorithm, which was purchased by UnitedHealth Group in 2021, started getting applied to patients in nursing homes all over the country, and providers started to become aware of it. But the patients had no idea and no ability to sort of question the algorithm or the decisions that were being made.

Meghan McCarty Carino: And in your story, you write about the case of Frances Walter. Can you tell me more about that?

Ross: Frances Walter was in her 80s when she suffered a fall and broke her shoulder. She had her shoulder surgically repaired, and then she was transferred to a skilled-nursing facility to recover. And the algorithm that UnitedHealth Group owns under its NaviHealth subsidiary was then run on Frances Walter. The algorithm projected that she would take about 16.6 days to recover. But Frances Walter was a complicated patient: She had an allergy to pain medicines, and she took longer to recover. The algorithm had suggested that date, though, and that’s the date that payment for her care was cut off. Her family wasn’t prepared for that. She lived alone, and so she had to leave the nursing facility. And what happened for the Walter family, as happens to families in that position, was they had to decide: Do we pay out of pocket to keep her there, or what do we do? Frances Walter ended up paying down her life savings to cover extra care and enrolling in Medicaid to keep getting coverage. And then she filed an appeal. It was about a year and a half later that a judge finally found that, yes, in fact, the Walter family and her providers were right. She was entitled to more care, and she was eventually reimbursed. About 0.2% of the people this happens to appeal and achieve that outcome.

McCarty Carino: How did these systems become so common in Medicare Advantage?

Ross: I think the insurers in Medicare Advantage saw an opportunity to say, “Hey, the math based on patients like you says this. And therefore, we’re going to make that the standard.”

McCarty Carino: And this was sort of seen as a solution to a perceived problem of overbilling by these sorts of long-term care facilities?

Ross: Right, because there’s legitimately waste. People go into these facilities, and facilities do have an incentive to keep them there longer. And so the insurers were finding a way to fight back. Patients were routinely staying in nursing homes for 20, 25, 30 days or more. And the insurers are saying, “No, we’ve got the data, we’ve got the algorithm. They should be done with their care at such and such a date. And we’re going to cut off payment in concert with that.”

McCarty Carino: Having treatment denied by insurance, despite it being recommended by a patient’s care team, is, unfortunately, nothing new in this country. What’s different about an algorithm doing it?

Ross: It can really perpetuate bias and bad decision-making on a massive scale. When it’s humans doing this on a sort of case-by-case basis, that’s one thing. But when it’s an algorithm that sits behind these physicians, and it’s based on data that’s been fed into it, those data can be biased and the decision-making gets skewed. And it’s not just five patients here or 10 patients there, it’s 20 million. It happens repeatedly to people who have no idea that this is being done and have absolutely no recourse to fight back.

McCarty Carino: So are there any legal guardrails for using technology like this in this kind of context?

Ross: Well, when we started writing about it, the Centers for Medicare & Medicaid Services was in the process of considering some new regulations, new rules, to require transparency in the use of these tools. So basically, what Medicare ended up coming down and saying, after our reporting, was that you can’t use your own proprietary algorithm to make these decisions. There has to be a human in the loop who’s looking at that and making an informed decision. It can’t just be that your system relies on the algorithm and you’re automating decisions out of that.

McCarty Carino: Have there been any moves forward to regulate this?

Ross: There have. The Office of the National Coordinator for Health [Information Technology], for instance, has recently passed a new rule to require transparency into algorithmic and AI tools that are used in the context of electronic health records. So that’s happening, and then there’s the Centers for Medicare & Medicaid Services rule requiring similar transparency into the processes around algorithms used on the insurance side. But there’s really no enforcement mechanism.

McCarty Carino: You’ve also reported on how the Coalition for Health AI, which is an industry lobbying group, basically, is now working with federal regulators to kind of inform standards, in part because government doesn’t necessarily have the technical expertise to keep up with this, right?

Ross: Yeah. There’s a broad coalition; a bunch of these groups have sort of convened — technology companies, health systems, government regulators as well — to sort of say, “All right, we’ll come up with what the standards are going to be for testing these systems.” The Coalition for Health AI is proposing to create a network of laboratories across the country that would sort of stress test algorithms prior to their deployment to define their performance characteristics and their accuracy, so that providers and payers and others in the public can sort of assess that. So I think all of those are positive steps forward. All of that’s beginning to percolate. But we don’t yet know how those standards are going to come down and whether they’re going to really protect consumers or more protect the interests of the makers of the algorithms and the hospitals using them.

More on this

Last year on the show, we spoke with New York University professor Meredith Broussard about her book “More Than a Glitch” and how AI and algorithmic decision-making in several high-stakes domains could worsen inequality.

She did a deep dive into the role of AI in medicine after she herself was diagnosed with breast cancer, in part by an AI-assisted imaging system. And she warned against a kind of bias she said humans are often guilty of: technochauvinism, or the impulse to see technological solutions as more objective or generally superior. That happens even though AI often encodes and magnifies human biases or falls prey to errors a human mind might never make.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer
Rosie Hughes, Assistant Producer