GPT-4 helps ensure recommendations for additional imaging aren't overlooked in reports
Recommendations for additional imaging are routinely included in radiology reports but are sometimes overlooked or not communicated in a timely manner. Experts believe large language models can help address these lapses in care.
LLMs have garnered great interest in radiology for their potential to lighten some of the administrative burdens departments face, like putting together structured reports, generating impressions, detecting errors, formatting, labeling and more. The authors of a new paper in the American Journal of Roentgenology postulate these models also can identify recommendations for additional imaging that sometimes fall between the cracks.
“Automated extraction of actionable details of recommendations for additional imaging (RAIs) from radiology reports could facilitate tracking and timely completion of clinically necessary RAIs and thereby potentially reduce diagnostic delays,” Ramin Khorasani, MD, with the Center for Evidence-Based Imaging at Brigham and Women's Hospital, and colleagues suggested.
To test the ability of LLMs to spot additional imaging recommendations, the team utilized two versions of OpenAI’s ChatGPT—GPT-3.5 and GPT-4. They presented the LLMs with 250 randomly selected reports spanning five different subspecialties, 25 of which were used to engineer a prompt instructing the models to extract details about the modality, body part, timeframe and rationale of the RAI when presented with report text.
The LLMs were instructed to use only the impressions section of the reports. A fourth-year medical student and a radiologist from the relevant subspecialty assessed the models’ responses for accuracy.
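The workflow described above—prompting a model to pull structured RAI details out of an impression—can be sketched roughly as follows. This is an illustrative approximation only: the field names, prompt wording, JSON response format, and helper functions are assumptions for demonstration, not the authors' actual prompt or code.

```python
import json

# Illustrative sketch only: these field names and the JSON reply format are
# assumptions, not the prompt used in the AJR study.
FIELDS = ["modality", "body_part", "timeframe", "rationale"]

def build_prompt(impression: str) -> str:
    """Compose an instruction asking an LLM to extract RAI details as JSON."""
    return (
        "From the radiology report impression below, extract any recommendation "
        "for additional imaging. Return a JSON object with the keys "
        f"{', '.join(FIELDS)}; use null for any detail not stated.\n\n"
        f"Impression:\n{impression}"
    )

def parse_response(raw: str) -> dict:
    """Parse the model's JSON reply, keeping only the expected keys."""
    data = json.loads(raw)
    return {key: data.get(key) for key in FIELDS}

# A fabricated example reply, standing in for an actual model response:
reply = (
    '{"modality": "MRI", "body_part": "liver", '
    '"timeframe": "6 months", "rationale": "indeterminate lesion"}'
)
print(parse_response(reply)["modality"])  # MRI
```

In practice, `build_prompt` would be sent to the model's chat API and the reply fed to `parse_response`; validating the parsed fields against the report, as the study's human reviewers did, remains essential before any clinical use.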
Both models performed well, with GPT-4 slightly outmatching GPT-3.5. GPT-4 and GPT-3.5 achieved accuracies of 95.6% and 94.2% for referencing the correct modality, 89.3% and 88.3% for body part, 96.1% and 95.1% for timeframe, and 89.8% and 88.8% for rationale, respectively.
Based on these findings, the group suggested that LLMs could play an especially beneficial role in ensuring imaging recommendations are not overlooked in the future.
“The technique could represent an innovative method to facilitate timely completion of clinically necessary radiologist recommendations,” they concluded.