While industry experts in artificial intelligence (AI) lauded its promise in healthcare, lawmakers appeared skeptical during a hearing of the House Energy and Commerce Health Subcommittee on Wednesday.
One witness, Christopher Longhurst, MD, of UC San Diego Health, recalled the use of a new algorithm for detecting COVID-19 early in the pandemic.
In one case, a woman in the emergency department with cardiac symptoms underwent a chest x-ray and the algorithm indicated signs of early pneumonia, resulting in a test for COVID-19. The woman’s test came back positive, Longhurst said, but she was diagnosed early and went home safely.
“To me, that was a really great example of AI finding a signal that we would not have found otherwise as a human,” he said.
Other witnesses described how AI is used to enhance clinical decision-making, allow more personalized treatment, and reduce administrative burden, but members of Congress — many of them clinicians — had lingering doubts.
Over-Reliance on Technology
Rep. Larry Bucshon, MD (R-Ind.), a former cardiothoracic surgeon, said his adult children cannot navigate around the block without using Google Maps.
“I mean, they literally don’t know what direction they’re going,” Bucshon said, to hushed laughter. Bucshon said he worries that medical professionals will similarly become overly reliant on AI and that it will hinder their clinical decision-making skills.
He asked whether medical schools should educate students about both the “pros and cons” of AI in healthcare.
Benjamin Nguyen, MD, senior product manager for consumer healthcare app company Transcarent, acknowledged his own struggle to navigate without Google Maps and agreed that academic institutions must focus on "the art and science of medicine."
Still, AI can enhance learning through efficiencies that allow students to focus on the most important concepts, he argued.
Whether they’re trained to use AI or not, doctors will be using these technologies, Nguyen said. “So, the most important way to prevent over-reliance is to educate them on the limitations of that technology.”
Will Physicians Get Sued?
Rep. Diana Harshbarger, PharmD (R-Tenn.), asked about the intersection of AI and medical liability.
Longhurst noted that, like clinical decision-support tools that have been used for years, AI is another kind of tool and “ultimately the liability for treatment of a patient rests with the treating physician.”
Harshbarger then asked if he could envision “a scenario where litigation might increase if doctors don’t utilize AI.”
Longhurst responded by acknowledging another panelist — David Newman-Toker, MD, PhD, neurologist at Johns Hopkins University School of Medicine in Baltimore — whose remarks he echoed, stating that if AI tools are proven to decrease mortality and increase survivorship, “then they will become a best practice that should be used in every case.”
AI’s Impact on Physician Burnout
Rep. Kim Schrier, MD (D-Wash.), noted that despite more than a decade of post-college training, physicians have been likened to "cogs in a wheel" and to "line workers."
“We’re burning out,” said Schrier, a pediatrician. She asked how to prevent physicians from “becoming a check on a system where AI makes patient management decisions for them,” despite their education and expertise.
Longhurst said he’s “incredibly optimistic” about AI’s potential to help mitigate burnout and the “incredibly positive results” found in pilot projects using AI scribes. He acknowledged that these technologies are still “quite expensive,” but as they become more available, he said he believes they can help “remediate” this past decade of burnout.
Algorithms vs Patients
Rep. Robin Kelly (D-Ill.) raised concerns about flawed AI algorithms that have been misused to deny patients’ medical claims. (Family members of two deceased UnitedHealthcare enrollees sued the insurer in November for denying care doctors allege was medically necessary.)
Newman-Toker explained that the tension between insurers denying claims to make more money and providers inflating claims to get paid more is a long-standing problem that has bled over into the AI space.
While the focus of the hearing had been on AI that is being directly integrated into the healthcare space, Newman-Toker said this kind of AI exists in the periphery in an unregulated and “potentially dangerous” space, given how little is known about the systems managing the process of healthcare.
Direct-to-patient symptom checkers, which people alarmingly rely on for medical advice, also fall into this unregulated space, he noted.
“I do think we need to start bringing some of those [technologies] into the regulatory framework,” Newman-Toker said.
Perpetuation of Bias in Healthcare
Newman-Toker also warned the subcommittee about the critical need to train AI on appropriate data sources and to properly test algorithms.
“Put simply, if available electronic health record data sets are used to train AI systems, the best we can hope for is AI systems which replicate and formalize implicit human biases. And the worst we can expect is AI systems that are frequently wrong in their recommendations,” he said.
Rep. Ann McLane Kuster (D-N.H.) shared Newman-Toker’s concern about bias in AI and asked what solutions were needed.
“Gold standard datasets” are important to testing, Newman-Toker said, but to achieve those, “we actually have to do things in healthcare that we don’t normally do, such as … determine what actually happens to our patients downstream after an encounter.”
When a patient leaves a clinician with a diagnosis, clinicians don't always get to follow up; the patient could end up in a different health system, Newman-Toker noted. "So we have to start coordinating data architectures … [and] developing and curating good datasets that can be used at a large scale to train these AI models."