Patient education materials get boost in readability from generative AI

Many patient education materials are written above the average reading level, making them difficult to understand. New research suggests that generative artificial intelligence can make these materials more patient-friendly.

Ideally, educational pamphlets for patients should be written at a sixth-grade reading level, according to the American Medical Association. However, many of these materials are written at a much higher reading level, often 11th grade and up, and contain information and medical jargon that patients may find confusing.

This can be especially problematic when educational materials or reports contain information related to a diagnosis or recommendations for additional exams or treatment. Generative AI can help, the authors of a new Clinical Radiology paper propose.

“The advent of advanced artificial intelligence and natural language processing technologies, such as ChatGPT-4 by OpenAI and Google Gemini, offers a novel approach to tackling this problem,” corresponding author Mitul Gupta, from the University of Texas at Austin’s Dell Medical School, and colleagues noted. “These tools have the capacity to transform complex information into versions that are at a sixth grade reading level.” 

The team tested two generative AI tools, GPT-4 and Google Gemini, to see how well they could adapt radiology-related materials to be more patient-friendly. They selected seven current pamphlets from a large radiology practice for the large language models to reformulate, assessing their reading levels before and after they were adjusted.

Three radiologists reviewed the reframed materials for appropriateness, relevance and clarity. The original pamphlets were written at an average grade level of 11.72.

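The paper's exact scoring pipeline is not described here, but the kind of before-and-after comparison the team ran can be approximated with a standard readability formula. The short Python sketch below, which assumes the open-source textstat package and uses placeholder pamphlet text rather than material from the study, estimates the Flesch-Kincaid grade level and word count of an original passage and a simplified version.

    # Rough sketch of a before/after readability check, assuming the
    # open-source textstat package (pip install textstat). The sample
    # text is placeholder content, not taken from the study.
    import textstat

    original = (
        "Percutaneous image-guided biopsy is performed utilizing sonographic "
        "or computed tomographic guidance to obtain histopathologic specimens."
    )
    simplified = (
        "We use ultrasound or CT pictures to guide a thin needle and take a "
        "small tissue sample for testing."
    )

    for label, text in [("Original", original), ("Simplified", simplified)]:
        # Flesch-Kincaid grade: an estimate of the U.S. school grade needed
        # to understand the text (the AMA target is roughly sixth grade).
        grade = textstat.flesch_kincaid_grade(text)
        words = len(text.split())
        print(f"{label}: grade level {grade:.1f}, {words} words")

A fuller replication would also feed each pamphlet through an LLM with a plain-language prompt and have clinicians rate the output, as the study's reviewers did.
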
ChatGPT reduced the word count by around 15%, with 95% of the pamphlets retaining 75% of their vital information; Gemini cut the count more substantially, by 33%, while maintaining 75% of the necessary content in 68% of the pamphlets.

Both LLMs were able to bring the materials down to an average reading level of sixth to seventh grade. ChatGPT outperformed Gemini in terms of appropriateness (95% vs. 57%), clarity (92% vs. 67%) and relevance (95% vs. 76%). Interrater agreement was also significantly better for ChatGPT.

“Unlike traditional methods, which involve manual rewriting, our AI-driven approach potentially offers a more efficient and scalable solution,” the authors suggested. “While limitations exist, the potential benefits to patient education and healthcare outcomes are significant, warranting further investigation and integration of these AI tools in radiology and potentially across various medical specialties.” 

The group added that further refinement specific to informative medical materials could improve the LLMs' performance.


In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.

