Do large language models help or hinder workflows related to radiology reports?
Most radiologists agree patients should have access to their imaging reports, but many are concerned the terminology will create confusion. Some experts have suggested that large language models could help reword reports in a way patients can easily understand.
But whether this is a feasible option, and how it might actually improve workflows, has yet to be determined.
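The basic idea is simple to prototype: a report's text is sent to an LLM with instructions to rewrite it in plain language, and the draft comes back for radiologist review. The sketch below, which assumes the OpenAI Python SDK, shows the general shape of such a workflow; the model name and prompt wording are illustrative assumptions, not the approach used in any study discussed here.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt: the instructions and reading level are illustrative only.
SYSTEM_PROMPT = (
    "You are assisting a radiologist. Rewrite the following radiology report "
    "in plain language at roughly an eighth-grade reading level. Do not add, "
    "remove, or reinterpret any findings."
)

def simplify_report(report_text: str) -> str:
    """Return a lay-language draft of a radiology report, pending radiologist review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content
```

In any deployment resembling what the surveyed radiologists endorsed, the returned draft would be a starting point only, with a radiologist manually checking it before it reaches the patient.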
A new analysis in the journal Current Problems in Diagnostic Radiology dives into the potential of LLMs to translate radiology reports into lay language, detailing how radiologists feel about tapping into AI assistance to help patients better understand their findings. While many of the radiologists surveyed for the paper agree that LLMs hold great potential, very few believe the models are ready to fly solo.
“While other care providers remain the primary audience for imaging reports, patients are now becoming frequent readers,” corresponding author Howard P. Forman, MD, MBA, with the Department of Radiology and Biomedical Imaging at Yale School of Medicine, and colleagues noted. “Despite this shift, these reports remain overly complex, with medical jargon posing a significant barrier.”
Previous studies have demonstrated that LLMs can reliably translate medical terminology into lay language, but none have examined how radiologists feel about the concept. To get a better sense of rads' perceptions of deploying the technology, researchers distributed an eight-question survey to all interventional and diagnostic radiologists and clinical fellows at the Yale School of Medicine.
Of the 52 respondents, nearly 53% agreed or strongly agreed that patients should have immediate access to their radiology reports, though more than 90% acknowledged that those reports are not easily digestible by the average patient. Just under half supported allowing LLMs or other AI assistance to translate reports into more easily understandable language, with the caveat that the final reports undergo a manual check by a radiologist. Without radiologist review, support dropped to 23%.
“Given significant responsibilities already, there is limited incentive for providers to add components to their workflow. These factors may be why few radiologists support simplification, whether with AI or without,” the group suggested.
In the future, as more organizations integrate AI-enabled tools into their radiology practices, the resulting workflow improvements could offset the additional time radiologists would need to manually check reports. However, how these technologies will affect clinical practice remains to be seen, the authors noted.