A ‘powerful’ tool: Commercially available AI significantly outperforms radiologist reads for trauma imaging
Commercially available AI software was recently found to be significantly more sensitive than radiologists at detecting an array of skeletal lesions on trauma radiographs.
The software was put to the test on nearly 5,000 trauma radiographs, assessing for the presence of fractures, dislocations, elbow effusions and focal bone lesions. In some cases, it outperformed radiologists by as many as 82 percentage points in sensitivity.
Beyond fractures, radiologists are tasked with assessing trauma radiographs for dislocations, bony lesions and joint effusions. Trauma imaging is particularly prone to reader error, but experts believe artificial intelligence applications could help radiologists avoid costly mistakes.
They shared evidence to support their case recently in the European Journal of Radiology.
“Trauma X-ray reading is often delayed or assigned to non-expert readers due to the lack of experienced radiologists and an increased workload in imaging. Therefore, the rate of diagnostic errors on trauma radiographs is one of the highest, potentially leading to malpractice claims,” wrote corresponding author Nor-Eddine Regnard, of the South Ile-de-France Imaging Network, and colleagues.
To gauge how the AI stacks up against radiologists in assessing traumatic injuries on X-ray, the researchers used commercially available software, BoneView, to retrospectively analyze three consecutive months of radiographs from 14 centers within a private imaging group. Each exam had been performed after a traumatic injury to the limbs or pelvis. The software's detections were compared against the radiologists' reports, and a senior skeletal radiologist reviewed any discrepancies.
The AI outperformed radiologists in lesion-wise sensitivity for fracture detection, at 98.1% compared to 73.7%. The trend held for dislocations (89.9% vs. 63.3%), elbow effusions (91.5% vs. 84.7%) and focal bone lesions (98.1% vs. 16.1%). Radiologists' specificity was 100% across the board, while the software's ranged from 88% to 99.8%, and negative predictive values were high as well.
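For context, these figures follow the standard screening-test definitions, which the study summary does not spell out: sensitivity = TP / (TP + FN), the share of true lesions a reader flags; specificity = TN / (TN + FP), the share of lesion-free cases correctly cleared; and negative predictive value = TN / (TN + FN), the probability that a negative read is truly negative. The 82-point gap cited above is simply the difference between the two focal bone lesion sensitivities: 98.1% minus 16.1% is 82 percentage points.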
The study's authors highlighted that their research offers a real-world comparison of radiologists and AI, noting that, when paired with human readers, AI could be a “powerful tool” in clinical practice, improving accuracy and speeding up read times.