KJZZ is a service of Rio Salado College, and Maricopa Community Colleges
Copyright © 2024 KJZZ/Rio Salado College/MCCCD

A prescription for better AI in medicine

Exam rooms and doctor’s offices are spaces of trust, where life-altering decisions can meet wallet-emptying treatments. Some might say, if we’re going to let AIs into that circle, then they’d better be darned near perfect.

But do we really know what we mean by “perfect”? After all, we don’t require human doctors to be right 100% of the time.

“They’re held to the standard of reasonableness,” said bioethicist and AI expert Melissa McCradden of the Hospital for Sick Children in Toronto. “So they need to have good reasons behind any clinical decision or medical recommendation they might make.”

Diagnoses are opinions based on knowledge, experience and instincts. They’re sometimes wrong, which is why it’s a bad idea to train an AI by just showing it a bunch of X-rays and diagnoses of, say, pneumonia.

“When you set up an algorithm to just replicate what a human is doing, you're also setting up that algorithm to replicate all of the human’s problems and, in some cases, exaggerate those problems,” said Dr. Ziad Obermeyer of the University of California, Berkeley School of Public Health.

Replicating human decisions also risks importing the unwelcome biases that can creep into every one of them, from individual health assessments to so-called “pop health” strategies meant to lift whole populations.

“When we looked at the way that a lot of algorithms were working to prioritize patients for pop-health interventions, we found an enormous amount of racial bias,” said Obermeyer.
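How does that bias get in? One documented mechanism in pop-health algorithms is training on a proxy label: past healthcare spending standing in for health need. The sketch below is not from the article, and all of its numbers and names are made up; it only illustrates how a group that spends less on care at the same level of illness gets under-prioritized by a cost ranking.

```python
# Hypothetical illustration of proxy-label bias (all data invented).
# Cost is used as a stand-in for need; group "B" generates lower costs
# at the same level of illness, so a cost ranking passes its members over.

patients = [
    # (patient_id, group, true_need_score, past_cost_dollars)
    ("p1", "A", 9, 9000),   # group A: spending tracks need closely
    ("p2", "A", 4, 4000),
    ("p3", "B", 9, 3500),   # group B: equally sick, half the spending
    ("p4", "B", 4, 2000),
]

def prioritize(patients, key, top_k=2):
    """Rank patients by `key` (descending) and pick the top_k for outreach."""
    ranked = sorted(patients, key=key, reverse=True)
    return [p[0] for p in ranked[:top_k]]

by_cost = prioritize(patients, key=lambda p: p[3])  # what the algorithm sees
by_need = prioritize(patients, key=lambda p: p[2])  # what we actually want

print("Selected by cost proxy:", by_cost)  # ['p1', 'p2'] — all group A
print("Selected by true need:", by_need)   # ['p1', 'p3'] — includes p3
```

Ranking by the proxy drops p3, who is just as sick as p1 but spent less, which is the shape of the disparity Obermeyer describes.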

Avoiding pitfalls

Like a shady politician, AIs can draw false inferences from patterns in the data.

Obermeyer illustrated the point by describing AI-assisted scheduling systems, which might try to “optimize” primary care by canceling and rebooking patients who are more likely to be no-shows.

“But the patients who get disproportionately canceled are not just the patients who decide not to show up; they're the patients who can't show up because of barriers to access and transportation and cost,” he said.
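The scheduling failure works the same way. In this made-up sketch (not the article’s system; all probabilities invented), the model’s no-show score bundles together patients who choose not to come and patients who can’t come, so a simple cancel-above-threshold rule hits the barrier-facing patients hardest.

```python
# Hypothetical "optimizing" scheduler (all data invented).
# The predicted no-show probability conflates unwillingness with
# inability: patients facing transport/cost barriers score higher.

patients = [
    # (patient_id, faces_access_barrier, predicted_no_show_prob)
    ("a", False, 0.10),
    ("b", False, 0.25),
    ("c", True,  0.55),  # high score *because* of barriers
    ("d", True,  0.60),
]

CANCEL_THRESHOLD = 0.5

# Cancel-and-rebook anyone the model flags as a likely no-show.
canceled = [pid for pid, _, prob in patients if prob > CANCEL_THRESHOLD]
print("Canceled:", canceled)  # ['c', 'd'] — exactly the barrier-facing patients
```

The rule is neutral on its face, but because barriers drive the score, the patients who most need help keeping appointments are the ones who lose their slots.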

Such examples invite a larger ethical question: Should we train these bold new pseudo-minds to reiterate the flawed status quo, or to realize loftier ambitions?

Consider the question of who does or doesn’t make it onto transplant lists.

“Knowing that there are issues with equity in terms of placement on the list, we might want to think about, ‘How can we treat all patients with similar clinical features the same?’” said McCradden.

The transplant process is fraught with biases involving income, race and ethnicity, age, lifestyle, comorbidities, even social stability.

Can we use AI to make it better?

“Where we might say there's a particular group that's underrepresented, what design choices could we make to our algorithm in order to improve their access to care in order to address that health disparity?” said McCradden.

Back to basics

Designers, developers and testers should know the answers to these kinds of questions before AIs are programmed or deployed. After all, if they don’t know what a tool is for, how can they know if it works? And is it using accepted methodologies?

“You know, what is the information that this algorithm is supposed to be producing? How would we know if this algorithm is right?” said Obermeyer.

Such questions are essential, as is the need to test AIs using data that are independent, reliable and suited to their roles.

“Data is not universal; even across one system, you're gonna have various quality issues, and those probably relate to lots of choices that propagate,” said Dr. Vincent Liu, a regional medical director with Kaiser Permanente.

Finally, the success or failure of even a properly designed, trained and tested AI will, like any tool, come down to how it’s used.

The human factor

“The funny thing is, you could have a perfect algorithm that fails completely because of this second stage; you could actually have an imperfect algorithm that is a smashing success because the system is deploying it effectively,” said Liu.

For the foreseeable future, that safe, effective deployment will mean keeping humans with years of training, experience and institutional knowledge in the driver’s seat.

McCradden says patients strongly support this model, in part because it lets them weigh in with their own perspectives and values.

“If the nurse who's interpreting that output knows exactly what to do, and proceeds with that, you can have an imperfect algorithm. You can have a range of performances, but if that nurse knows what to do, then you can effectuate patient care,” she said.

So in the short term, and properly handled, AIs might be as impactful — and perhaps painful — as the migration to electronic health records.

“These things are tools that doctors are going to use. And I think these tools are going to make doctors better at doing the things that doctors are already doing,” said Obermeyer.

Nothing succeeds or fails in isolation.

The question with AI medicine appears to be, can we find a way to integrate it into our healing gardens that will, like an automated sprinkler system, bring benefits that we can control and adapt to specific needs?


Nicholas Gerbis joined KJZZ’s Arizona Science Desk in 2016. A longtime science, health and technology journalist and editor, his extensive background in related nonprofit and science communications informs his reporting on Earth and space sciences, neuroscience and behavioral health, and bioscience/biotechnology.

Apart from travel and three years in Delaware spent earning his master’s degree in physical geography (climatology), Gerbis has spent most of his life in Arizona. He also holds a master’s degree in journalism and mass communication from Arizona State University’s Cronkite School and a bachelor’s degree in geography (climatology/meteorology), also from ASU.

Gerbis briefly “retired in reverse” and moved from Arizona to Wisconsin, where he taught science history and science-fiction film courses at the University of Wisconsin-Eau Claire. He is glad to be back in the Valley and enjoys contributing to KJZZ’s Untold Arizona series.

During the COVID-19 pandemic, Gerbis focused almost solely on coronavirus-related stories and analysis. In addition to reporting on the course of the disease and related research, he delved into deeper questions, such as the impact of shutdowns on science and medicine, the roots of vaccine reluctance and the policies that exacerbated the virus’s impact, particularly on vulnerable populations.