What passes for artificial intelligence today is really a pattern-matching program, not unlike the autocomplete feature in a text message or email; it does not reason or understand what it is saying.
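To see that flavor of pattern matching in miniature, consider how a bare-bones autocomplete works: it counts which words tend to follow which, then suggests the most frequent continuation. The Python sketch below is a toy illustration only, not how GPT is actually built (GPT uses neural networks trained on vastly more data), but it captures the basic idea of prediction without understanding.

```python
from collections import Counter, defaultdict

# Toy autocomplete: count which word follows each word in a sample text,
# then "predict" by suggesting the most frequent follower. This is pure
# pattern matching -- no reasoning, no understanding of meaning.
text = "the patient has a fever the patient needs rest the doctor sees the patient"
words = text.split()

followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def suggest(word):
    # Return the word most often seen after `word`, or None if unseen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))      # -> "patient" (the most common follower of "the")
print(suggest("patient"))  # -> "has" (a count-based guess, not a judgment)
```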
Still, the pattern-finding power of generative AIs like GPT could transform data-intensive fields such as health care. But lingering questions about safety, ethics, privacy, reliability and quality of care mean some find AI in medicine a tough pill to swallow.
A new paper in the journal PLOS Digital Health tries to gauge the public’s confidence in the technology.
“They’re concerned,” said lead author Dr. Marvin Slepian, Regents Professor of medicine at the UA College of Medicine – Tucson and a member of the university's BIO5 Institute. “Who will be taking care of us in the future? Will it be the physician? Will it be the physician with AI helping? Or will it be some kind of computer-based AI system alone? This is the robotic kind of thing.”
To find out how comfortable people are with AI diagnosis and treatment, UA researchers surveyed almost 2,500 people.
According to Slepian, about 53% of participants said they aren’t convinced AI diagnoses are trustworthy.
“There’s a big group that thought, ‘Well, this may not be so bad.’ And then, on the other hand, there were a wide range of individuals that also felt, ‘Maybe this is a little bit dangerous,’” he said.
Slepian added that the genie is out of the bottle; the task moving forward is for doctors and engineers to make AI accurate.
The Show spoke with Slepian about just how AI works in medical diagnoses.