AI trained on synthetic images makes accurate cancer diagnoses
A machine learning method trained on synthetic breast ultrasound elastography images accurately classified tumors when applied to real-world images, according to a new study published in the August issue of Computer Methods in Applied Mechanics and Engineering.
The elastic heterogeneity of a tumor, the authors wrote, is one piece of information that can help distinguish benign tumors from malignant ones, but obtaining the images needed to make that distinction requires many complicated steps. Therefore, Assad Oberai, of the University of Southern California's (USC) School of Engineering, created synthetic data using a physics-based approach.
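The article does not describe the authors' actual forward model, but the general idea of physics-inspired synthetic data can be sketched loosely: labeled images are generated from simulated material properties, with the "malignant" class given a more heterogeneous stiffness distribution. Everything in the toy example below (the image size, stiffness ranges, and the synthetic_modulus_map helper) is a hypothetical stand-in for illustration, not the study's method.

```python
# Toy illustration only: images encode a spatial stiffness (elastic modulus)
# map containing a circular inclusion, and the two classes differ in how
# heterogeneous the inclusion's stiffness is. This is NOT the study's
# physics-based forward model; it only shows the idea of generating labeled
# training images from simulated material properties.
import numpy as np

def synthetic_modulus_map(size=128, malignant=False, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    yy, xx = np.mgrid[0:size, 0:size]

    # Random circular inclusion standing in for a tumor.
    cx, cy = rng.integers(size // 4, 3 * size // 4, size=2)
    radius = rng.integers(size // 10, size // 5)
    inclusion = (xx - cx) ** 2 + (yy - cy) ** 2 < radius ** 2

    image = np.ones((size, size))          # soft background tissue
    image[inclusion] = rng.uniform(3.0, 6.0)  # stiffer inclusion

    if malignant:
        # "Malignant" class: add spatial heterogeneity inside the inclusion.
        noise = rng.normal(0.0, 1.0, size=(size, size))
        image[inclusion] += noise[inclusion]

    return image, int(malignant)

# Build a small labeled dataset of modulus maps.
rng = np.random.default_rng(0)
samples = [synthetic_modulus_map(malignant=(i % 2 == 1), rng=rng) for i in range(10)]
images = np.stack([s[0] for s in samples])
labels = np.array([s[1] for s in samples])
print(images.shape, labels)
```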
“…in the case of medical imaging, you’re lucky if you have 1,000 images,” Oberai said in a USC news release. “In situations like this, where data is scarce, these kinds of techniques become important.”
The researchers trained a 4-layer convolutional neural network (CNN) on 4,000 synthetic images and a 5-layer CNN on 8,000. The goal was for the networks to distinguish the features of benign tumors from those of malignant ones.
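The article does not give the networks' exact configuration, so the PyTorch sketch below is only an assumed illustration of what a small CNN binary classifier for single-channel elastography-style images might look like; the layer widths, the 128x128 input size, and the SmallTumorCNN name are inventions for the example, not the authors' architecture.

```python
# Illustrative sketch only: a small CNN for binary (benign vs. malignant)
# classification of single-channel elastography-style images.
# Layer counts, channel widths, and the 128x128 input size are assumptions;
# the study's actual architecture and training setup are not reproduced here.
import torch
import torch.nn as nn

class SmallTumorCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # two classes: benign, malignant

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = SmallTumorCNN()
    dummy = torch.randn(8, 1, 128, 128)  # random stand-in batch of "images"
    logits = model(dummy)
    print(logits.shape)  # torch.Size([8, 2])
```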
Overall, the methods classified tumors with an accuracy between 99.7% and 99.9%. The team then tested the 4-layer CNN on images from 10 patients who presented with breast lesions, five benign and five malignant. Oberai and colleagues found the CNN achieved an accuracy of 80%.
“While this level of accuracy is comparable with other analysis techniques used in conjunction with elastography, what is remarkable here is that in training the CNN no real data was used,” the authors wrote. “It is likely if the net were re-trained while accounting for real data, perhaps with higher weights, it would perform better.”
In the same USC news story, Oberai noted that such a tool would likely be useful only as an aid to radiologists; the black-box nature of many algorithms still poses too great a risk when dealing with patient health.
“The general consensus is these types of algorithms have a significant role to play, including from imaging professionals whom it will impact the most. However, these algorithms will be most useful when they do not serve as black boxes,” Oberai said. “What did it see that led it to the final conclusion? The algorithm must be explainable for it to work as intended.”