Error rates in radiology have not changed in 75 years


In 1949, former RSNA President Leo Henry Garland, MD, conducted the first study to measure the error rate of radiologists. He found that experienced radiologists missed important findings in approximately 30% of chest radiographs. Despite all the advancements in technology to improve image quality, 75 years later that error rate has remained about the same.

"If you measure that same error rate today, it is the same, and yet everything about radiology has changed. Dr. Garland today would not recognize my practice, and I would think that his practice was quaint. Yet we have the same error rate, and why is that? Obviously it is because of the human factor. We have not had any appreciable human evolution since 1949. "We are not any better with our eyes and brain now as compared to then," explained Michael Bruno, MD, MS, FACR, vice chair for quality and patient safety, chief of the Division of Emergency Radiology, Penn State Milton S. Hershey Medical Center. He spoke in a session about this at the Radiological Society of North America (RSNA) 2023 meeting, and in an interview with Health Imaging

Three types of radiologist errors

In his interview with Health Imaging, Bruno outlined three areas of common errors that lead radiologists to miss findings or misinterpret images:

1. Error in how we work: Trying to do too much too fast, especially when the radiologist is tired. Bruno said scheduling changes can help mitigate these errors, but not eliminate them entirely.

2. Error in how we think: Humans tend to have biases, and radiologists are no exception, Bruno said. "We also tend to take in information that supports our hypothesis and ignore information that doesn't. And a lot of times we are overconfident, or underconfident, in our decision making," he explained, adding that these biases can be overcome if radiologists remain aware that they may not be seeing the full picture.

3. Not seeing something: This is the most common type of mistake, in which radiologists miss a finding simply because they do not see it on the image. Bruno said this is why peer review is a practice providers must adhere to.

"I think these are just errors on how we are made, our biology, our eyes, our brain. We did not evolve to practice radiology. We have this fixed error rate, and I really think it has to do what is going on in our noggin," he added.

Bruno said Penn State has done a lot of research on this, along with Emory, Brown and Johns Hopkins. He said they want to better understand what goes on in radiologists' brains from a neurocognitive standpoint when they read exams and make errors.

"These most common types of errors are biologically mediated. We have all had this experience when you don't see something on an image and latter you come across that image, or maybe a colleague points it out to you, and you look at something you missed and it is so obvious and say how could I have possibly missed that. It turns out there is a neurobiological mechanism. It was true in 1949 when Garland first recognized this, and it is still true now," Bruno explained. He said better understanding this mechanism will enable recommendations on biofeedback to help address this commonly encountered error.

Humans can't see 'gorillas' in CT scans

Bruno said a famous study from a decade ago tested this theory about not seeing things in medical images. The study asked expert radiologists to find a lung nodule and report other incidental findings, which included a picture of a gorilla.

A literal gorilla image was placed, with varying opacity, in five slices of the lung CT dataset to see how many observers would notice. The study found 83% of radiologists failed to see it, and 45% did not see the nodule because of inattentional blindness.

Bruno was one of the expert readers who participated—and he missed the gorilla.

"We have to deal with this human factor. Good radiologists who are doing the right thing, who are knowledgeable, experienced, well trained, well rested, had their coffee, sitting the right chair, and their displays are bright enough, are still going to have a 3-4% error rate," he said.

Bruno mentioned a study that looked at error rates in mammography exam reports, showing an overall false positive rate of 4.4%, with error rates fluctuating based on the time of day, meaning fatigue can cause radiologists to miss findings or report something that isn't there. Other studies have shown that fatigue, screen time, and energy levels can also impact radiology image interpretation.

AI as a second set of eyes

One possible way to address the error rate and help radiologists catch more missed or incidental findings, even before a peer-review double read by another human, is artificial intelligence (AI).

In human peer-review reads, Bruno said the first and second radiologists ideally have not spoken to each other about the exam, to avoid introducing any potential bias. But with AI, algorithms that highlight an area with color shading or circles on the image might bias the human reader, which could lead to an increase in false positives or to misinterpreting features because of an AI-suggested diagnosis.

"AI could be that second set of eyes. But it depends on how the AI interacts with the human, and I think we need a lot more work on that," Bruno said.

He added that there needs to be more work on the human-AI interface. For this reason, Bruno said it is unlikely that heat maps and color coding will be the final form of how AI alerts radiologists to suspected findings. In fact, the U.S. Food and Drug Administration (FDA) does not allow heat-mapping overlays in cleared AI tech.

Dave Fornell is a digital editor with Cardiovascular Business and Radiology Business magazines. He has covered healthcare for more than 17 years, with a focus on cardiology and radiology. Fornell is a five-time winner of the Jesse H. Neal Award, one of the most prestigious editorial honors in the field of specialized journalism. The wins included best technical content, best use of social media and best COVID-19 coverage. Fornell was also a three-time Neal finalist for best range of work by a single author. He produces more than 100 editorial videos each year, most of them interviews with key opinion leaders in medicine. He also writes technical articles, covers key trends, conducts video hospital site visits, and is very involved with social media. E-mail: dfornell@innovatehealthcare.com
