Might AI automation improve peer review?

As the use of artificial intelligence grows in radiology, research continues to offer evidence of its utility in detecting disease, predicting outcomes and, most recently, facilitating peer review.

In a recent analysis published in the Journal of the American College of Radiology, experts shared how a combination of visual classification software and a natural language processing algorithm was able to spot missed suspicious liver lesions (SLLs) on CT pulmonary angiography (CTPA) exams by cross-checking images against their corresponding reports. After assessing more than 2,000 CTPAs, the researchers concluded that using AI to identify errors “dramatically reduced” the amount of radiologist involvement required to initiate peer reviews [1].

Conventional peer review typically requires radiologists to manually identify errors to submit for further review, a time-consuming process that limits opportunities to spot misdiagnoses, the paper’s authors explained. They proposed that an automated system could save radiologists’ time while also increasing the number of cases eligible for review.

“Particularly in instances where diagnostic errors are infrequent but clinically significant and detection by random selection would be unlikely, AI could expedite case identification for peer review and potential issuance of a report addendum,” corresponding author Sarah P. Thomas, MD, an Abdominal Imaging Fellow with the Department of Radiology at Duke University Medical Center, and colleagues wrote. 

For the study, the visual classification software was applied to 2,573 CTPAs from a multisite teleradiology practice to assess images for the presence or absence of SLLs. Simultaneously, the natural language processing algorithm was applied to the corresponding reports to determine whether SLLs were mentioned in the text. This cross-check surfaced 136 cases with potentially discrepant findings, which radiologists reviewed and narrowed to 13 confirmed missed SLLs.
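The underlying triage logic is simple: a case only goes to a human reviewer when the image classifier and the report parser disagree. Below is a minimal sketch of that step; the class, function and field names are hypothetical illustrations, not the study’s actual software.

```python
from dataclasses import dataclass

@dataclass
class CaseResult:
    """Combined AI outputs for one CTPA exam (hypothetical structure)."""
    case_id: str
    image_has_sll: bool   # visual classifier: SLL visible on the CT images?
    report_has_sll: bool  # NLP algorithm: SLL mentioned in the report text?

def flag_discrepancies(cases: list[CaseResult]) -> list[CaseResult]:
    """Return only the cases where image and report findings disagree.

    Agreement (both True or both False) needs no review; a mismatch
    suggests a possibly missed or unreported lesion.
    """
    return [c for c in cases if c.image_has_sll != c.report_has_sll]

# Example: of three exams, only the mismatched one is queued for peer review.
cases = [
    CaseResult("A", image_has_sll=True,  report_has_sll=True),   # documented
    CaseResult("B", image_has_sll=False, report_has_sll=False),  # clean
    CaseResult("C", image_has_sll=True,  report_has_sll=False),  # possible miss
]
for c in flag_discrepancies(cases):
    print(f"Case {c.case_id}: flag for radiologist peer review")
```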

With the software’s help, the ratio of CTs requiring radiologist review to missed SLLs identified was 10:1, the experts shared, adding that without AI’s assistance the ratio would have been at least 66:1.
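For context, the reported 10:1 figure appears consistent with the raw counts above, as this quick back-of-envelope check shows. Note that mapping the 136 flagged cases against the 13 confirmed misses is an inference from the article’s numbers, not the authors’ stated calculation.

```python
# Back-of-envelope check of the review burden using the article's counts.
flagged_for_review = 136   # AI-flagged, potentially discrepant cases
confirmed_misses = 13      # missed SLLs confirmed by radiologists
ratio = flagged_for_review / confirmed_misses
print(f"{ratio:.1f} reviews per confirmed miss")  # ~10.5, i.e. roughly 10:1
```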

“AI-augmented peer review can allow for the rapid and efficient review of more cases than feasible by human efforts alone, with relatively little direct radiologist effort,” the authors suggested. “AI-assisted peer review also has the advantage of being blind to the initial reader, potentially removing the bias that comes with reviewing a colleague’s cases.” 

While the researchers maintained that the automated process saved radiologists time, the savings could not be quantified from exam and image counts alone because the readers did not time themselves. They suggested that future work address this limitation to better predict how AI assistance can facilitate meaningful reviews.

“Once 'missed' cases have been identified, this data combined with additional information such as scan parameters, radiologist experience, time spent reviewing the case, and time of day, could be leveraged to identify predictors of clinical errors,” they concluded. 

The study abstract is available here.

Hannah Murphy

In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.
