Patient education materials get boost in readability from generative AI

Many patient education materials are written above the average reading level, making them difficult to understand. New research suggests that generative artificial intelligence can make these materials more patient-friendly.

Ideally, educational pamphlets for patients should be written at a sixth-grade reading level, according to the American Medical Association. However, many of these materials are written at a much higher reading level, often 11th grade and above, and contain dense information and medical jargon that patients may find confusing.

This can be especially problematic when educational materials or reports contain information related to a diagnosis or recommendations for additional exams or treatment. Generative AI can help, the authors of a new Clinical Radiology paper propose.

“The advent of advanced artificial intelligence and natural language processing technologies, such as ChatGPT-4 by OpenAI and Google Gemini, offers a novel approach to tackling this problem,” corresponding author Mitul Gupta, from the University of Texas at Austin’s Dell Medical School, and colleagues noted. “These tools have the capacity to transform complex information into versions that are at a sixth grade reading level.” 

The team tested two generative AI tools, GPT-4 and Google Gemini, to see how well they could adapt radiology-related materials to be more patient-friendly. They selected seven current pamphlets from a large radiology practice for the large language models to reformulate, assessing the pamphlets' reading levels before and after they were adjusted.
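The article does not reproduce the team's prompts or pipeline, but the rewriting step can be pictured as a single instruction to a chat model. The sketch below uses the OpenAI Python SDK; the model name, prompt wording and sample pamphlet text are illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch: ask a chat model to rewrite pamphlet text at a sixth-grade level.
# The prompt and model name are assumptions for illustration only.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def simplify_pamphlet(pamphlet_text: str) -> str:
    """Return a plain-language rewrite of the pamphlet text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite patient education materials at a sixth-grade "
                        "reading level, keeping all medically important details."},
            {"role": "user", "content": pamphlet_text},
        ],
    )
    return response.choices[0].message.content

# Example usage with a short, jargon-heavy snippet (invented for illustration).
print(simplify_pamphlet(
    "An MRI utilizes a magnetic field and radiofrequency pulses to acquire "
    "cross-sectional images without ionizing radiation."
))
```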

Three radiologists reviewed the reframed materials for appropriateness, relevance and clarity. The original pamphlets were written at an average grade reading level of 11.72.
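The article does not say which readability formula produced that figure, but grade-level scores of this kind are commonly computed with the Flesch-Kincaid grade-level formula. Below is a minimal, self-contained Python sketch of that calculation; the syllable counter is a rough heuristic and the sample sentence is invented for illustration.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a simple silent-'e' adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# A jargon-heavy sentence scores well above a sixth-grade level.
sample = ("Incidental pulmonary nodules identified on computed tomography "
          "warrant interval surveillance imaging per established guidelines.")
print(round(flesch_kincaid_grade(sample), 2))
```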

ChatGPT reduced the word count by around 15%, with 95% of the pamphlets retaining 75% of their vital information; Gemini cut the word count more substantially, by 33%, but maintained 75% of the necessary content in only 68% of the pamphlets.

Both LLMs were able to bring the materials down to average reading levels in the sixth- to seventh-grade range. ChatGPT outperformed Gemini in terms of appropriateness (95% vs. 57%), clarity (92% vs. 67%) and relevance (95% vs. 76%). Interrater agreement was also significantly better for ChatGPT.

“Unlike traditional methods, which involve manual rewriting, our AI-driven approach potentially offers a more efficient and scalable solution,” the authors suggested. “While limitations exist, the potential benefits to patient education and healthcare outcomes are significant, warranting further investigation and integration of these AI tools in radiology and potentially across various medical specialties.” 

The group added that further refinement specific to informative medical materials could improve the LLMs' performance.

In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.
