Radiologists interpret chest X-rays better with AI than without it
Australian researchers have developed a deep learning model to help radiologists interpret chest X-rays, regardless of the organization in which they practice.
Chest X-rays are the most commonly used imaging test around the world, the group explained Thursday in The Lancet Digital Health. But with few trained thoracic specialists and high workloads contributing to errors, experts are increasingly turning to AI for help.
The researchers trained their model on more than 800,000 inpatient, outpatient and emergency images from datasets across the globe. When using the tool, radiologists produced more accurate interpretations for 80% of clinical findings, the authors reported.
Jarrel Seah, MBBS, a radiologist at Alfred Health in Melbourne, and colleagues said, to their knowledge, this is the most comprehensive model to date and has already been developed into a clinical decision-support tool.
“Overall, the model provided additional information to radiologists, facilitating improved decision-making and making interpretation more efficient,” Seah and co-authors added in the July 1 study. “Effective implementation of the model has the potential to augment clinicians and improve clinical practice,” they wrote later.
In addition to the more than 800,000 training images, the team evaluated the tool on 2,568 enriched chest X-rays from adults with at least one frontal exam. Twenty radiologists reviewed these cases with and without deep learning assistance.
Without AI's help, radiologists recorded a macro-averaged area under the receiver operating characteristic curve (AUC) of 0.713, compared with 0.808 with the model's guidance. For 15% of the findings, the model proved statistically non-inferior.
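For readers unfamiliar with the metric, a macro-averaged AUC is simply the per-finding AUC averaged with equal weight across all findings, so common and rare findings count the same. The sketch below illustrates the calculation with made-up labels and scores; the data and finding count are illustrative only, not from the study.

```python
# Minimal sketch of a macro-averaged AUROC: compute the AUC for each
# finding separately, then take the unweighted mean. Data is hypothetical.

def auroc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case is scored above a randomly chosen negative case."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auroc(per_finding):
    """Equal-weight mean of the per-finding AUCs."""
    aucs = [auroc(labels, scores) for labels, scores in per_finding]
    return sum(aucs) / len(aucs)

# Two hypothetical findings, each with binary ground truth and model scores.
findings = [
    ([1, 0, 1, 0], [0.9, 0.2, 0.8, 0.4]),  # positives ranked above negatives -> AUC 1.0
    ([1, 0, 1, 0], [0.3, 0.6, 0.7, 0.1]),  # one ranking error -> AUC 0.75
]
print(macro_auroc(findings))  # -> 0.875, the mean of the two per-finding AUCs
```

In a study like this one, the same averaging would run over all clinical findings rather than two, which is why a macro average is a natural summary when findings vary widely in prevalence.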
Seah et al. say their model was built as a ready-to-implement tool, and further research is underway to confirm that it can serve as a diagnostic adjunct in real-world settings.
Read the full study here.