How 'mindlessly' following AI guidance impacts radiologist performance

Although artificial intelligence assistance has proven to be of great benefit to radiologists interpreting mammograms, new data give experts pause, highlighting the bias such assistance can introduce.

A new paper published in Radiology highlights how automation bias can impair the performance of radiologists across a range of experience levels, even the most seasoned among them. Radiologists interpreting screening mammograms may be especially susceptible to this bias because these exams are repetitive in nature, explained lead author Thomas Dratsch, MD, PhD, of the Institute of Diagnostic and Interventional Radiology at University Hospital Cologne in Germany, and colleagues.

“In an ideal scenario, radiologists correctly integrate the information provided by AI, benefiting from cases in which the AI provides a better suggestion and ignoring cases in which AI makes an incorrect prediction, and produce better diagnostic reports than they would without AI,” the group noted. 

However, Dratsch and colleagues suggested that once radiologists build trust in assistive AI technology, there is a concern that overreliance on the system could cause them to “stop critically engaging with the AI results and start mindlessly following them.”  

If the team’s recent findings are any indication of how an overreliance on AI can affect radiologists’ performance, then experts have reason to be concerned about automation bias.  

For their work, the team tasked 27 radiologists with providing Breast Imaging Reporting and Data System (BI-RADS) assessments for 50 mammograms with the help of an AI system. The exams contained a mix of correct and incorrect BI-RADS category suggestions from the AI.

The AI system’s prediction had a significant impact on the accuracy of every group of radiologists (inexperienced, moderately experienced and very experienced). Readers were more likely to assign an incorrect BI-RADS category when the AI system suggested an incorrect category, and more likely to assign the correct category when the AI’s suggestion was correct.

This finding was particularly noteworthy in radiologists with less experience, whose accuracy fell to less than 20% when the AI system made an incorrect suggestion. However, even the most experienced of the cohort—those who had worked as radiologists for 15 years or more—fell victim to automation bias as well, recording a significant dip in accuracy when the system suggested an incorrect category (down to 45.5% from 82%).

“We anticipated that inaccurate AI predictions would influence the decisions made by radiologists in our study, particularly those with less experience,” the group noted. “Nonetheless, it was surprising to find that even highly experienced radiologists were adversely impacted by the AI system’s judgments, albeit to a lesser extent than their less seasoned counterparts.” 

These findings further emphasize the need for safeguards when integrating AI into radiology workflows, the team suggested. In the future, the group intends to explore possible measures to reduce automation bias among radiologists.

The study abstract can be viewed here.


In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.
