ChatGPT Could Help Liver Cancer Patients

ChatGPT for liver cancer patients. Image by brgfx on Freepik

According to a recent study by Cedars-Sinai researchers, ChatGPT, an AI chatbot, may help patients with cirrhosis and liver cancer achieve better health outcomes by providing easy-to-understand information about the basics of these diseases, lifestyle choices, and treatments. The research, published in the peer-reviewed journal Clinical and Molecular Hepatology, shows how the AI system may be used in clinical practice.

“Patients with cirrhosis and/or liver cancer and their caregivers often have unmet needs and insufficient knowledge about managing and preventing complications of their disease,” said Brennan Spiegel, MD, MSHS, director of Health Services Research at Cedars-Sinai and co-corresponding author of the study. “We found ChatGPT—while it has limitations—can help empower patients and improve health literacy for different populations.”

Patients diagnosed with liver cancer and cirrhosis, an end-stage liver disease that is also a major risk factor for the most common form of liver cancer, often require extensive treatment that can be complex and challenging to manage.

“The complexity of the care required for this patient population makes patient empowerment with knowledge about their disease crucial for optimal outcomes,” said Alexander Kuo, MD, medical director of Liver Transplantation Medicine at Cedars-Sinai and co-corresponding author of the study. “While there are currently online resources for patients and caregivers, the literature available is often lengthy and difficult for many to understand, highlighting the limited options for this group.”

Individualized instruction

According to Kuo, AI models such as ChatGPT may contribute to better patient education and understanding. ChatGPT, short for Chat Generative Pre-trained Transformer, has rapidly gained popularity for the human-like text it produces in chatbot conversations: a user can enter any prompt, and the chatbot generates a response based on the data it was trained on. It has already shown some promise for medical professionals, producing basic medical reports and accurately answering medical student exam questions.
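
As a rough illustration of the prompt-and-response pattern described above, the sketch below sends a single question to a chat model. It assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY environment variable; the model name and the sample question are placeholders for illustration only, not details taken from the study.

```python
# Minimal sketch of the prompt-and-response pattern described above.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
# environment variable; the model name and question are illustrative only.
from openai import OpenAI

client = OpenAI()

question = "What lifestyle changes are recommended for patients with cirrhosis?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
```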

“ChatGPT has shown to be able to provide professional, yet highly comprehensible responses,” said Yee Hui Yeo, MD, first author of the study and a clinical fellow in the Karsh Division of Gastroenterology and Hepatology at Cedars-Sinai. “However, this is one of the first studies to examine the ability of ChatGPT to answer clinically oriented, disease-specific questions correctly and compare its performance to physicians and trainees.”

To test ChatGPT’s knowledge of cirrhosis and liver cancer, investigators presented it with 164 commonly asked questions grouped into five categories: basic knowledge, diagnosis, treatment, lifestyle, and preventive medicine. Each question was posed to ChatGPT twice, and two liver transplantation experts independently graded the responses; a rough sketch of this question-and-grading workflow follows the list of findings below. The study found that:

  • ChatGPT answered about 77% of the questions correctly, showing high levels of accuracy on 91 questions across the various categories.
  • The experts grading the responses rated 75% of the answers about basic knowledge, treatment, and lifestyle as either comprehensive or correct but inadequate.
  • The proportion of responses containing a mix of correct and incorrect data was 22% for basic knowledge, 33% for diagnosis, 25% for treatment, 18% for lifestyle, and 50% for preventive medicine.
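
The workflow the investigators describe, posing each question twice and having experts grade the answers, can be approximated in code. The sketch below is purely illustrative: it assumes the OpenAI Python client (openai >= 1.0), an OPENAI_API_KEY environment variable, and made-up example questions; it is not the authors’ actual protocol or code.

```python
# Illustrative sketch (not the study's code): ask each question twice per
# category and save the answers to a CSV file for independent expert grading.
# Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY set.
import csv
from openai import OpenAI

client = OpenAI()

# Hypothetical example questions; the study used 164 commonly asked questions.
questions_by_category = {
    "basic knowledge": ["What is cirrhosis?"],
    "diagnosis": ["How is hepatocellular carcinoma diagnosed?"],
    "treatment": ["What treatments are available for early-stage liver cancer?"],
    "lifestyle": ["Can patients with cirrhosis drink alcohol?"],
    "preventive medicine": ["Which vaccines are recommended for patients with cirrhosis?"],
}

def ask(question: str) -> str:
    """Send one question to the chat model and return the text of its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

with open("chatgpt_responses_for_grading.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["category", "question", "attempt", "response"])
    for category, questions in questions_by_category.items():
        for question in questions:
            for attempt in (1, 2):  # each question is asked twice
                writer.writerow([category, question, attempt, ask(question)])
```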

The AI model also gave patients and caregivers helpful guidance on how to proceed when adjusting to a new diagnosis. The research made clear, however, that the tool is no substitute for advice from a medical professional.

“While the model was able to demonstrate strong capability in the basic knowledge, lifestyle, and treatment domains, it suffered in the ability to provide tailored recommendations according to the region where the inquirer lived,” said Yeo. “This is most likely due to the varied recommendations in liver cancer surveillance interval and indications reported by different professional societies. But we are hopeful that it will be more accurate in addressing the questions according to the inquirers’ location.”

“More research is still needed to better examine the tool in patient education, but we believe ChatGPT to be a very useful adjunctive tool for physicians—not a replacement—but an adjunctive tool that provides access to reliable and accurate health information that is easy for many to understand,” Spiegel said. “We hope that this can help physicians to empower patients and improve health literacy for patients facing challenging conditions such as cirrhosis and liver cancer.”

Other Cedars-Sinai authors are Jamil Samaan, Hirsh Trivedi, Aarshi Vipani, Walid Ayoub, Ju Dong Yang, and Omer Liran.

