Deep convolutional neural networks may improve MRI segmentation
New research published online Feb. 27 in the Journal of Digital Imaging described how deep convolutional neural networks (DCNNs) may improve MRI brain tumor segmentation.
Computer engineering researchers from Islamic Azad University, Rasht Branch and the University of Guilan in Rasht, Iran, proposed a high-capacity deep convolutional neural network (DCNN) for the study.
"A DCNN was proposed for more accurate and faster segmentation of the brain MR images to help physicians in the diagnosis and treatment of brain tumors," wrote lead author Farnaz Hoseini, PhD, from Islamic Azad University. "The purpose was to separate damaged tissues, despite their low-contrast and Y-shaped structure in segmentation images, as well as to resolve the imbalance in the training dataset."
The DCNN paired a network architecture with learning algorithms to better identify brain tumors of differing shapes, sizes, brightness, textures and locations. Differentiating between normal and abnormal tissue is essential during segmentation, because some brain tumors are harder to locate than others.
"The architecture and the learning algorithms are used to design a network model and to optimize parameters for the network training phase, respectively," the researchers wrote.
The DCNN's architecture contained five convolutional layers, all using three-by-three kernels, and one fully connected layer. An advantage of stacking small kernels, the researchers noted, is that it "allows making the effect of larger kernels with smaller number of parameters and fewer computations." In addition, the learning algorithm used in the DCNN consisted of 10 steps.
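For illustration, a minimal PyTorch sketch of a network with that layer count, five three-by-three convolutional layers followed by a single fully connected layer, might look like the following. The channel widths, patch size and class count here are assumptions for demonstration, not values reported in the paper.

```python
# Illustrative sketch only: five 3x3 convolutional layers and one fully
# connected layer, mirroring the layer counts described in the article.
# Channel widths, patch size, and class count are assumptions.
import torch
import torch.nn as nn

class PatchSegmentationCNN(nn.Module):
    def __init__(self, in_channels=4, num_classes=5, patch_size=33):
        super().__init__()
        widths = [64, 64, 128, 128, 128]  # assumed channel widths
        layers = []
        prev = in_channels
        for w in widths:
            # 3x3 kernels with padding keep the spatial size of the patch
            layers += [nn.Conv2d(prev, w, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            prev = w
        self.features = nn.Sequential(*layers)
        # a single fully connected layer maps patch features to class scores
        self.classifier = nn.Linear(prev * patch_size * patch_size, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# Example: classify a batch of 33x33 multimodal MR patches
patches = torch.randn(8, 4, 33, 33)
logits = PatchSegmentationCNN()(patches)
print(logits.shape)  # torch.Size([8, 5])
```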
Convolution is the basis of DCNNs, the researchers explained, because its sparse connections solve the problem of having too many parameters in neural networks while remaining a discriminative method. With deep learning tools, the entire network can fit into graphics processing unit (GPU) memory, boosting its speed.
"Due to the overlapping of neighbor layers, local correlation is used in such networks, and multiple and unique features are detected by weighing the layers in each sub-layer," the researchers wrote.
Using the Dice Similarity Coefficient metric, the researchers reported accuracies of 0.90, 0.85 and 0.84 for the complete, core and enhancing tumor regions, respectively, on the BRATS 2016 brain tumor segmentation challenge data set. The learning algorithm included task-level parallelism, and all MR images were classified for segmentation using a patch-based approach, explained Hoseini et al. Overall, the researchers found that the proposed DCNN increased the segmentation accuracy of brain tumors.
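The Dice Similarity Coefficient measures the overlap between a predicted segmentation mask A and a ground-truth mask B as 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of that calculation (our illustration, not the study's code) is:

```python
# Illustrative Dice Similarity Coefficient between a predicted binary tumor
# mask and a ground-truth mask, both assumed to be NumPy arrays.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # Dice = 2|A ∩ B| / (|A| + |B|)
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0], [0, 1, 0]])
print(round(dice_coefficient(pred, truth), 2))  # 0.67
```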
"Given that the number of labeled specimens in dataset is low, it is not possible to design models with greater depth for segmentation, since models with more depth would have more parameters leading to over-fitting in the conditions of data deficit," Hoseini et al. concluded. "On the other hand, because of the limited memory space and the number of parallel GPUs, the use of higher-volume data is difficult to handle. With the advancement of GPUs and the use of libraries that distribute processing across multiple GPUs and multiple machines, this problem can be addressed."