A new deep learning tool needs only a single 3D MRI scan to classify brain tumors into one of six common categories, according to an analysis published Wednesday.
Researchers from the renowned Mallinckrodt Institute of Radiology developed their convolutional neural network using more than 2,000 scans from various institutions.
The software performed well at identifying whether a tumor was present and at categorizing the intracranial abnormalities, the Washington University School of Medicine in St. Louis team reported Aug. 11 in Radiology: Artificial Intelligence.
“These results suggest that deep learning is a promising approach for automated classification and evaluation of brain tumors,” Satrajit Chakrabarty, MS, a doctoral student on the project at Mallinckrodt’s Computational Imaging Lab, explained. “The model achieved high accuracy on a heterogeneous dataset and showed excellent generalization capabilities on unseen testing data.”
The 2,105 3D scans were divided into a training set (1,396 images), an internal testing set (361) and an external testing set (348). Using the training set, the algorithm was taught to distinguish healthy scans from those containing tumors, and to classify tumors according to type.
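The three-way division described above can be sketched in a few lines. This is purely illustrative, not the study's code: the scan IDs are placeholders, and a simple shuffle is used here even though, in practice, an external test set would come from separate institutions rather than a random split.

```python
# Illustrative sketch (not the study's code): dividing 2,105 scans into
# the three subsets the article describes.
import random

scans = [f"scan_{i:04d}" for i in range(2105)]  # hypothetical scan IDs
random.seed(0)
random.shuffle(scans)

train = scans[:1396]              # used to teach the model
internal_test = scans[1396:1757]  # 361 held-out scans
external_test = scans[1757:]      # 348 scans for external validation

assert (len(train), len(internal_test), len(external_test)) == (1396, 361, 348)
```

The counts match the article's reported split exactly: 1,396 + 361 + 348 = 2,105.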
On internal data, the tool was 93% accurate across six tumor classes and one class of healthy images. Its negative predictive value, the proportion of scans it flagged as negative that were truly tumor-free, ranged from 98% to 100%. The positive predictive value, meanwhile, fell between 85% and 100%.
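For readers unfamiliar with these metrics, here is a hedged sketch of how accuracy, positive predictive value and negative predictive value are typically computed per class from a multi-class confusion matrix. The 3-class matrix below is made up for illustration; it is not the study's data.

```python
# Per-class PPV and NPV from a confusion matrix
# (rows = true class, columns = predicted class).

def ppv_npv(confusion, cls):
    """PPV = TP / (TP + FP); NPV = TN / (TN + FN) for one class."""
    n = len(confusion)
    tp = confusion[cls][cls]
    fp = sum(confusion[r][cls] for r in range(n) if r != cls)
    fn = sum(confusion[cls][c] for c in range(n) if c != cls)
    tn = sum(confusion[r][c] for r in range(n)
             for c in range(n) if r != cls and c != cls)
    return tp / (tp + fp), tn / (tn + fn)

# Toy 3-class example, NOT the study's results.
cm = [[50, 2, 1],
      [3, 40, 2],
      [1, 1, 45]]

accuracy = sum(cm[i][i] for i in range(3)) / sum(map(sum, cm))
ppv0, npv0 = ppv_npv(cm, 0)
```

In the study, these quantities would be computed once per tumor class, which is why the article reports PPV and NPV as ranges rather than single numbers.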
Analyzing external data, which included only high-grade and low-grade gliomas, the model reached an accuracy of 92%.
The researchers see this platform as an improvement upon traditional 2D approaches, which require radiologists to manually delineate the tumor area on an MRI before submitting it to the model. They also believe it could be extended to other brain tumor types and neurological disorders.
“This network is the first step toward developing an artificial intelligence-augmented radiology workflow that can support image interpretation by providing quantitative information and statistics,” Chakrabarty concluded.