Deep learning network detects, localizes fractures on wrist x-rays

A team from Singapore demonstrated that an object detection convolutional neural network (CNN) can accurately detect and localize fractures on wrist x-rays, according to a Jan. 30 study published in Radiology: Artificial Intelligence. The approach may also be easier for clinicians to verify than traditional classification CNNs.

Prior studies have demonstrated that CNNs can detect fractures on radiographs, wrote first author Yee Liang Thian, MD, of the National University of Singapore, and colleagues, but those methods classify images as either fracture or non-fracture with no localization component. That broad classification makes it hard for clinicians to verify results.

“The task of object detection involves two fundamental questions about an image: what object is in it, and where it is within the image,” Thian et al. added. “This is in contrast to prior studies involving deep learning that approached fracture detection as an image classification problem, which describes what is in the image, but not where it is.”
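To make that distinction concrete, the sketch below (not the authors' code; the class names and fields are illustrative assumptions) contrasts the output of an image classifier, which answers only "what," with the output of an object detector, which also answers "where" via bounding boxes a clinician can check against the radiograph.

```python
# Illustrative Python sketch of the two output types discussed above.
# Names and fields are assumptions for illustration, not the study's code.
from dataclasses import dataclass
from typing import List

@dataclass
class ClassificationResult:
    """A classifier answers only 'what': one label and a confidence score."""
    label: str          # e.g. "fracture" or "no fracture"
    confidence: float

@dataclass
class Detection:
    """A detector answers 'what' and 'where': label, score and a bounding box."""
    label: str                  # e.g. "fracture"
    confidence: float
    box_xyxy: List[float]       # [x_min, y_min, x_max, y_max] in image pixels

def reviewable_detections(detections: List[Detection], threshold: float = 0.5) -> List[Detection]:
    """Keep detections confident enough to overlay on the image for radiologist review."""
    return [d for d in detections if d.confidence >= threshold]
```

Because each retained detection carries a box, a radiologist can visually confirm or reject the finding at the indicated location rather than trusting a single image-level label.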

The authors extracted more than 7,300 wrist radiographs from a hospital PACS, and radiologists annotated all radius and ulna fractures. Ninety percent of the images were used to train the model, while the remaining 10 percent were reserved for validation.
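A 90/10 split like the one described can be expressed in a few lines; the snippet below is a hedged sketch using scikit-learn with placeholder IDs and annotations, not the study's actual data pipeline.

```python
# Minimal sketch of a 90/10 train-validation split (assumed workflow,
# not the authors' pipeline). Placeholders stand in for ~7,300 radiographs.
from sklearn.model_selection import train_test_split

image_ids = [f"wrist_{i:05d}" for i in range(7300)]                # placeholder image IDs
annotations = [{"boxes": [], "labels": []} for _ in image_ids]     # placeholder fracture boxes

train_ids, val_ids, train_ann, val_ann = train_test_split(
    image_ids, annotations, test_size=0.10, random_state=42
)

print(len(train_ids), len(val_ids))  # roughly 6,570 training and 730 validation images
```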

Overall, the model detected and correctly localized 91 percent (310/340) of radius and ulna fractures on frontal views and 96 percent (236/245) on lateral views. On a per-image basis, the CNN achieved a sensitivity, specificity and AUC of 96 percent, 83 percent and 0.92, respectively, for the frontal view; for the lateral view, those figures were 97 percent, 86 percent and 0.93. The per-study sensitivity, specificity and AUC were 98 percent, 73 percent and 0.89, respectively.
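For readers unfamiliar with these metrics, the short sketch below shows how per-image sensitivity, specificity and AUC can be computed from image-level scores and ground-truth labels; the toy arrays are illustrative only and are not the study's data.

```python
# Hedged sketch: computing sensitivity, specificity and AUC from
# image-level fracture scores. Toy data, not the study's results.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])                      # 1 = fracture present on image
y_score = np.array([0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.7, 0.3])     # e.g. max detection confidence per image

auc = roc_auc_score(y_true, y_score)                             # threshold-free discrimination

y_pred = (y_score >= 0.5).astype(int)                            # choose an operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                                     # true-positive rate
specificity = tn / (tn + fp)                                     # true-negative rate

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} auc={auc:.2f}")
```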

“The object detection network used in our study provides classification as well as spatial localization information, which is more informative than a single classification label and easily verifiable by the clinician,” the authors wrote. “Such location information would be useful in developing deep learning clinical algorithms to aid radiologists in reporting.”

Thian and colleagues did report that their network produced false-positive labels on old fractures and deformities, which they believed pointed to an overlap with the learned features of acute fractures. Despite this, the group wrote that they demonstrated the feasibility of their object detection network, which may serve as an important stepping stone for future AI development.

“The ability to predict location information of abnormality with deep neural networks is an important step toward developing clinically useful artificial intelligence tools to augment radiologist reporting,” the authors concluded.