

A collaborative team of researchers from the University of Minnesota Medical School, Stanford University, Beth Israel Deaconess Medical Center, and the University of Virginia published their findings in JAMA Network Open. The team investigated how well physicians used GPT-4, an artificial intelligence (AI) large language model, as an aid in patient diagnosis.
GPT-4 Study
The study included 50 U.S.-licensed physicians in family, internal, and emergency medicine. The research team found that providing GPT-4 to physicians as a diagnostic aid did not significantly improve their clinical reasoning compared with conventional resources. Other key findings include:
- GPT-4 alone achieved significantly higher diagnostic performance scores than both physicians using conventional diagnostic web tools and physicians supported by GPT-4.
- Physicians who used GPT-4 showed no significant improvement in diagnostic performance compared with those who used standard diagnostic resources.
“The field of AI is expanding rapidly and impacting our lives inside and outside of medicine. It is important that we study these tools and understand how we best use them to improve the care we provide as well as the experience of providing it. This study suggests that there are opportunities for further improvement in physician-AI collaboration in clinical practice,” said Andrew Olson, MD, a professor at the University of Minnesota Medical School and hospitalist with M Health Fairview.
These findings highlight the complexity of integrating AI into clinical practice. Although GPT-4 alone produced promising results, pairing physicians with GPT-4 did not significantly outperform the use of traditional diagnostic resources. This points to a nuanced role for AI in healthcare and underscores the need for further research on how AI can most effectively support clinical practice, as well as on how clinicians should be trained to use these tools.
For more information: Goh, E., et al. (2024) Large Language Model Influence on Diagnostic Reasoning. JAMA Network Open. doi.org/10.1001/jamanetworkopen.2024.40969.