AI detects more than twice as many incidental breast lesions as radiologists
Artificial intelligence could help providers identify incidental breast lesions on routine CT imaging that patients undergo for other clinical indications.
Prior studies have indicated that breast lesions are incidentally detected on up to 8% of CT scans that include the chest, though some experts believe this figure is likely higher. Concerningly, these incidental lesions turn out to be cancerous in up to 70% of cases, making their timely detection critical.
Considering the high utilization of CT imaging, there are ample opportunities to spot these lesions in routine settings. However, identifying them, especially when they are not the focus of radiologists’ reads, can be challenging, authors of a new paper in Current Problems in Diagnostic Radiology write.
“Computed tomography represents an opportunity for early detection, as breast tissue is routinely imaged on chest CT examinations,” Tyler J. Fraum, MD, with the Mallinckrodt Institute of Radiology at the Washington University School of Medicine in St. Louis, and colleagues suggest. “Breast lesions, especially when small, can be challenging to detect on CT, which provides lower spatial resolution compared with mammography and lower contrast resolution compared with magnetic resonance imaging. Even large lesions may be obscured by fibroglandular tissue, particularly on unenhanced CT. Moreover, radiologists may miss breast lesions due to their peripheral locations or incidental nature.”
Spotting incidental lesions is an area where many believe AI can shine, as it detects findings the human eye might miss. For this study, experts sought to determine whether an algorithm could efficiently detect radiologically significant incidental breast lesions (RSIBLs) missed by the original interpreting radiologists on chest CT examinations. Visual classifier and natural language processing algorithms were tested on a dataset of more than 3,500 chest CT scans to see how many RSIBLs would be flagged compared with the original interpretations.
Of the 3,541 exams included in the analysis, 92.6% were marked negative by both algorithms. When two radiologists reviewed the exams that were flagged, 76 significant incidental breast lesions were confirmed. Compared to the original interpreting radiologists, the algorithms identified more than double the number of incidental lesions, though they triggered more false positives. The group estimated that use of the algorithms could have reduced the number of images viewed by radiologists by more than 97%.
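The paper itself does not include code, but the triage logic described above can be illustrated with a minimal sketch. The class, function names, score fields, and thresholds below are hypothetical assumptions rather than the authors' implementation; the sketch simply shows the reported principle that a case is routed to radiologist review if either the visual classifier or the NLP model flags it, while cases negative by both are excluded from the manual review queue.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-algorithm triage described in the study.
# Field names, scores, and thresholds are illustrative assumptions,
# not the authors' implementation.

@dataclass
class ChestCTCase:
    exam_id: str
    image_score: float   # output of a visual classifier run on the CT images
    report_score: float  # output of an NLP model run on the original report

def needs_radiologist_review(case: ChestCTCase,
                             image_threshold: float = 0.5,
                             report_threshold: float = 0.5) -> bool:
    """Flag a case for review if either algorithm suspects a breast lesion.

    Cases marked negative by both algorithms (92.6% of exams in the study)
    would never enter the manual review queue, which is how the authors
    estimate a >97% reduction in images viewed by radiologists.
    """
    return (case.image_score >= image_threshold
            or case.report_score >= report_threshold)

if __name__ == "__main__":
    cases = [
        ChestCTCase("exam-001", image_score=0.12, report_score=0.08),  # negative by both
        ChestCTCase("exam-002", image_score=0.81, report_score=0.05),  # flagged by image model
    ]
    flagged = [c.exam_id for c in cases if needs_radiologist_review(c)]
    print(f"Cases queued for radiologist review: {flagged}")
```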
“AI algorithms can identify substantially more [radiologically significant incidental breast lesions] than radiologists but with many more false positives,” the team writes. “Nevertheless, workflows can be established to allow efficient review of flagged cases by radiologists to identify true RSIBLs in need of further evaluation (including follow-up dedicated breast imaging or comparison with prior examinations).”
Though the algorithm significantly outperformed standard peer review methods, the group acknowledges that additional validation is needed before the method can be deployed in clinical settings.