A collaborative team of researchers from the University of Minnesota Medical School, Stanford University, Beth Israel Deaconess Medical Center, and the University of Virginia published their findings in JAMA Network Open. The study investigated whether access to GPT-4, an artificial intelligence (AI) large language model, improved physicians' diagnostic reasoning.
GPT-4 Study
The study included 50 U.S.-licensed physicians in family, internal, and emergency medicine. The research team found that giving physicians access to GPT-4 as a diagnostic aid did not significantly improve their clinical reasoning compared with conventional resources. Other key findings include:
- GPT-4 alone achieved significantly higher diagnostic performance scores than both physicians using conventional diagnostic web tools and physicians assisted by GPT-4.
- Physicians who used GPT-4 showed no significant improvement in diagnostic performance compared with those who used conventional diagnostic resources.
“The field of AI is expanding rapidly and impacting our lives inside and outside of medicine. It is important that we study these tools and understand how we best use them to improve the care we provide as well as the experience of providing it. This study suggests that there are opportunities for further improvement in physician-AI collaboration in clinical practice,” said Andrew Olson, MD, professor at the University of Minnesota Medical School and hospitalist with M Health Fairview.
These findings highlight the complexities of integrating AI into clinical practice. While GPT-4 alone produced promising results, physicians working with GPT-4 did not significantly outperform those using conventional diagnostic resources. This points to a nuanced role for AI in healthcare and underscores the need for further research into how AI can most effectively support clinical practice, and how clinicians should be trained to use these tools.
For more information: Goh, E., et al. (2024) Large Language Model Influence on Diagnostic Reasoning. JAMA Network Open. doi.org/10.1001/jamanetworkopen.2024.40969.