Managing AI Risks in Healthcare; GPT-4’s Clinical Biases; DIY AI Clinic

Derick Alison

Welcome to MedAI Roundup, highlighting the latest news and research in healthcare-related artificial intelligence each month.

Twenty-eight healthcare groups — including CVS Health, Duke Health, Houston Methodist, and Mass General Brigham — have signed on to President Joe Biden’s plan to manage the risks of artificial intelligence (AI) in healthcare, the White House announced.

Meanwhile, HHS released its final rule on updated certification programs and requirements for healthcare technology companies as they develop AI-powered tools.

The New England Journal of Medicine launched its anticipated new monthly peer-reviewed journal — NEJM AI — focused on clinical research and applications of AI.

More than a year after ChatGPT changed the conversation around generative AI, the technology has found applications in clinical research, including improving data management, monitoring trial participants, and analyzing study outcomes. (Healthcare Brew)

In one such example, adults with type 2 diabetes who were starting or adjusting insulin therapy saw significant improvements in the time required to achieve optimal dosage and in overall adherence while using a voice-based conversational AI application, according to a randomized clinical trial of 32 adults published in JAMA Network Open.

Additionally, AI-aided colonoscopies significantly enhanced detection of colorectal neoplasia and reduced miss rates for adenomatous polyps by as much as 50%, according to a meta-analysis of 33 randomized clinical trials published in The Lancet.

On the other hand, researchers from Brigham and Women’s Hospital in Boston found OpenAI’s GPT-4 failed to appropriately model demographic diversity for medical conditions, revealing a tendency toward racial and gender bias during clinical decision-making support, according to a study published in The Lancet Digital Health.

Despite cautious optimism among clinicians about its future, most patients (80%) said they would be concerned if their physician used generative AI for patient care, according to a survey by Wolters Kluwer Health. However, 86% of survey respondents said they would be more comfortable if they knew medical professionals were involved in developing the source material for the AI model.

On the business side, Google Cloud announced a new partnership with Augmedix, a healthcare technology company, that will allow Augmedix to begin integrating Med-PaLM 2 — Google’s most powerful, medically tuned AI model — into its ambient medical documentation products, according to a company press release.

Meanwhile, another healthcare technology company, Forward, is trying to bring the power of AI directly to patients using a do-it-yourself clinic — called the CarePod — that will allow patients to access preventive care services such as biometric body scans or testing for hypertension and cardiac conditions. (Axios)


Michael DePeau-Wilson is a reporter on MedPage Today’s enterprise and investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.

