A collaborative team of researchers from the University of Minnesota Medical School, Stanford University, Beth Israel Deaconess Medical Center, and the University of Virginia published their findings in JAMA Network Open. They investigated how well physicians used GPT-4, an artificial intelligence (AI) large language model, to aid patient diagnosis.
GPT-4 Study
The study included 50 U.S.-licensed physicians in family, internal, and emergency medicine. The research team found that providing GPT-4 to physicians as a diagnostic aid did not significantly improve their clinical reasoning compared with conventional resources. Other key findings:
- GPT-4 alone achieved substantially higher diagnostic performance scores, outperforming both physicians using conventional diagnostic web resources and physicians supported by GPT-4.
- Physicians who used GPT-4 showed no significant improvement in diagnostic performance compared with those who used standard diagnostic resources.
“The field of AI is expanding rapidly and impacting our lives inside and outside of medicine. It is important that we study these tools and understand how we best use them to improve the care we provide as well as the experience of providing it,” said Andrew Olson, MD, a professor at the University of Minnesota Medical School and hospitalist with M Health Fairview. “This study suggests that there are opportunities for further improvement in physician-AI collaboration in clinical practice.”
These findings highlight the complexities of integrating AI into clinical practice. While GPT-4 alone produced promising results, pairing physicians with GPT-4 did not significantly outperform the use of traditional diagnostic resources. This points to a nuanced role for AI in healthcare and underscores the need for further research into how AI can effectively support clinical practice, and how clinicians should be trained to use these tools.
For more information: Goh, E., et al. (2024). Large Language Model Influence on Diagnostic Reasoning. JAMA Network Open. https://doi.org/10.1001/jamanetworkopen.2024.40969