Radiology department automates process for spotting reports with actionable findings
Relaying actionable findings to referring clinicians is a crucial part of radiologists’ responsibilities, and a new tool can help close gaps in communicating such information.
University of Tokyo researchers evaluated four methods for distinguishing radiology reports flagged as “actionable” in clinical practice from “non-actionable” ones. A publicly available tool beat out other machine learning models at the task, experts reported Sept. 11 in BMC Medical Informatics and Decision Making.
This natural language processing technique can help clinicians spot mentions of actionable findings that might otherwise be overlooked and avoid costly delays in patient care, according to Yuta Nakamura, with The University of Tokyo Hospital’s Department of Radiology, and co-authors.
“The results of this study suggest that the radiologists may have sometimes thought that actionable findings were present in the radiological images without explicitly urging further clinical examinations or treatments in the radiology report,” Nakamura et al. wrote, adding that their method can key in on these cases and bring them to physicians’ attention.
Back in September 2019, the hospital embedded a function in its system allowing clinicians to label radiology reports with an actionable tag. For their study, Nakamura et al. sought to determine which models could best distinguish such actionable reports from non-actionable documents.
They included more than 90,000 rad reports from their institution, 788 (0.87%) of which were labeled as actionable. A natural language processing tool known as bidirectional encoder representations from transformers, or BERT, beat out all the others.
Without order information, such as suspected diseases or indications, BERT notched the highest area under the precision-recall curve (0.51) and the highest area under the receiver operating characteristic curve (0.95). Adding order information did not meaningfully change performance.
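The study’s own code and data aren’t reproduced here, but a minimal sketch of this style of pipeline, assuming the Hugging Face transformers library, a generic "bert-base-uncased" checkpoint, and a hypothetical handful of labeled reports (none drawn from the study), might look like this. In practice, the classifier head would first be fine-tuned on an institution’s own labeled reports:

```python
# Minimal sketch: score radiology reports with a BERT sequence classifier
# and evaluate with the same metrics the study reports (AUPRC and AUROC).
# Assumptions: a generic pretrained checkpoint and hypothetical example
# reports; the classifier head below is untrained and would need
# fine-tuning on labeled reports before its scores mean anything.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import average_precision_score, roc_auc_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary: actionable vs. not
)
model.eval()

# Hypothetical held-out test set: report text plus a 0/1 actionable label.
reports = [
    "Nodule in the right upper lobe; recommend follow-up CT in 3 months.",
    "No acute cardiopulmonary abnormality.",
]
labels = [1, 0]

# Tokenize and run the classifier; the positive-class probability serves
# as each report's "actionable" score.
with torch.no_grad():
    inputs = tokenizer(
        reports, padding=True, truncation=True, max_length=512,
        return_tensors="pt",
    )
    logits = model(**inputs).logits
    scores = torch.softmax(logits, dim=-1)[:, 1].tolist()

print("AUPRC:", average_precision_score(labels, scores))
print("AUROC:", roc_auc_score(labels, scores))
```

The gap between the study’s two figures is typical of heavily imbalanced data: with only 0.87% of reports labeled actionable, the area under the ROC curve can look strong while the area under the precision-recall curve stays modest, which is why reporting both gives a fuller picture.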
BERT may have been most effective, the team noted, because it could pick up reports that only implicitly emphasized actionable abnormalities, while its classifier also spotted recommendations for follow-up care explicitly highlighted in rad reports.
Larger data pools from multiple organizations will be needed to refine the technique, according to the authors, but the early returns are promising.
“The results showed that our method based on BERT is more useful for distinguishing various actionable radiology reports from non-actionable ones than models based on other deep learning methods or statistical machine learning,” the group concluded.