New-look AI model for imaging results provides its own second opinions
Researchers have developed a new “dual-view” artificial intelligence (AI) algorithm capable of providing its own second opinion, sharing their findings in Nature Machine Intelligence.[1]
The new-look algorithm is divided into two distinct parts that work together to catch potential errors made along the way.
“One part of the AI system tries to mimic how radiologists read medical images by labelling them, while the other part of the system judges the quality of the AI-generated labelled scans by benchmarking them against the limited labelled scans provided by radiologists,” Himashi Peiris, a PhD candidate with the department of electrical and computer systems engineering at Monash University in Australia, explained in a prepared statement.
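That two-part interplay can be illustrated with a toy sketch. This is not the authors' implementation; the function names, threshold values, and the simple agreement-based critic below are all illustrative assumptions, standing in for what would in practice be trained neural networks.

```python
# Hypothetical sketch of a "dual-view" labelling setup: one component
# proposes labels for a scan, a second judges the proposal against the
# few expert-labelled examples and rejects low-quality output.

def propose_labels(pixels, threshold=0.5):
    """First view: label each pixel as lesion (1) or background (0)."""
    return [1 if p > threshold else 0 for p in pixels]

def critic_score(proposed, reference):
    """Second view: fraction of pixels agreeing with expert labels."""
    agree = sum(1 for a, b in zip(proposed, reference) if a == b)
    return agree / len(reference)

def dual_view_label(pixels, reference, min_quality=0.8):
    """Accept the proposal only if the critic judges it good enough."""
    proposal = propose_labels(pixels)
    return proposal if critic_score(proposal, reference) >= min_quality else None

scan = [0.9, 0.2, 0.7, 0.1]   # toy pixel intensities
expert = [1, 0, 1, 0]         # radiologist's labels for this scan
print(dual_view_label(scan, expert))  # prints [1, 0, 1, 0] (accepted)
```

In the real system both components would be learned models, but the division of labour is the same: the critic filters out unreliable AI-generated labels before they can mislead training.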
The group believes its research could save health systems a significant amount of time compared to more typical workflows.
“Traditionally, radiologists and other medical experts annotate, or label, medical scans by hand, highlighting specific areas of interest, such as tumors or other lesions,” Peiris said. “These labels provide guidance or supervision for training AI models. This method relies on the subjective interpretation of individuals, is time-consuming and prone to errors, and leads to extended waiting periods for patients seeking treatment.”
The new approach, by contrast, requires only a fraction of that manual annotation effort.
Peiris et al. tested the effectiveness of their algorithm by comparing it against four other techniques, using public datasets spanning multiple modalities, including CT and MRI. Overall, the team found that their dual-view AI model achieved an average improvement of 3% over the other methods.
“Our algorithm has produced groundbreaking results in semi-supervised learning, surpassing previous state-of-the-art methods,” Peiris said. “It demonstrates remarkable performance even with limited annotations, unlike algorithms that rely on large volumes of annotated data.”
The full analysis is available in Nature Machine Intelligence.