Deep learning reads x-rays to prevent mispositioned feeding tubes

A deep learning platform can accurately distinguish critical from non-critical feeding tube placement on radiographs, according to a recent study published in the Journal of Digital Imaging.

“Nasoenteric feeding tube placement must be confirmed prior to the commencement of tube feeding to subvert the catastrophic complications of bronchial or esophageal placement, which include aspiration, pneumonia, respiratory failure, pulmonary fistula formation, empyema, and death,” wrote Varun Singh, of the Department of Radiology at Thomas Jefferson University in Philadelphia, and colleagues.

“Because clinical demands often delay the review of these radiographs until hours after the studies are performed, a computer-aided detection (CAD) system that could expedite detection of critical results and triage patient care appropriately would be invaluable.”

The researchers sought to determine whether a deep convolutional neural network could classify nasoenteric feeding tube position on x-rays and, further, whether it could distinguish critical bronchial insertion from non-critical placement.

To do so, they used 5,475 deidentified frontal-view chest and abdominal x-rays—174 depicting bronchial tube insertions and the other 5,301 showing non-critical placement—to train three neural networks. More than 4,700 images were used for training, while 630 were used for validation and 100 for testing. Ground truth for enteric tube placement was established by two board-certified radiologists.

Overall, the neural networks offered an “encouraging” solution to classifying critical versus non-critical placement, scoring an AUC of 0.87. The top performing network—pretrained Inception V3—beat its untrained counterpart (AUC of 0.60) along with the other two neural networks. All pretrained networks performed better than those that were untrained.
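The study reports its results as AUC, the area under the ROC curve. As a hedged illustration only (the function and toy data below are not from the study), AUC can be computed directly from classifier scores via the Mann-Whitney formulation: it is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one.

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive scores higher
    (ties count as half a win)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: hypothetical scores for bronchial (1) vs. non-critical (0) cases.
y = [1, 1, 0, 0, 0]
s = [0.9, 0.4, 0.6, 0.2, 0.1]
print(roc_auc(y, s))  # 5 of 6 pairs ranked correctly -> ~0.833
```

An AUC of 0.5 corresponds to chance-level ranking (as with the untrained network's 0.60, only slightly better), while 1.0 is perfect separation.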

Singh and colleagues acknowledged that the small dataset left the model vulnerable to overfitting, but noted that dropout regularization was a “major strategy” used to combat it.
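Dropout, the regularization technique the authors cite, randomly zeroes a fraction of a layer's units during training so the network cannot rely on any single feature. The sketch below is a generic illustration of the standard "inverted dropout" formulation, not the study's implementation.

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: during training, zero a fraction `rate` of
    units at random and scale survivors by 1/(1-rate) so the expected
    activation is unchanged; at inference, pass values through."""
    if not training or rate == 0.0:
        return activations
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((4, 8))
train_out = dropout(x, rate=0.5, rng=rng)   # zeros and 2.0s, roughly half each
eval_out = dropout(x, rate=0.5, rng=rng, training=False)  # unchanged
```

By forcing the network to spread information across many units, dropout reduces overfitting on small datasets such as the 174 bronchial cases here.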

“A concerted human-machine approach with a validated, accurate network classifier to triage and prioritize critical findings for radiologist review could improve the detection time of bronchial insertions and clinical workflow,” the authors concluded. “Other ways to improve the feeding tube placement classifier include using other neural network architectures, ensembling multiple deep convolutional neural networks, acquiring a larger dataset, and employing strategic preprocessing techniques well-suited to assist DCNNs in radiographic feature extraction.”


Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
