AI model 98% accurate in echocardiogram classification—outperforming cardiologists
A study published online March 21 in npj Digital Medicine found that an artificial intelligence (AI) platform classified echocardiogram views with 98 percent accuracy, outperforming board-certified cardiologists.
Researchers designed and trained a convolutional neural network (CNN) to recognize 15 standard echocardiographic views. The training and validation sets consisted of more than 200,000 unique images, with a separate test set of more than 20,000 images.
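For readers curious what such a view classifier looks like in code, below is a minimal sketch of an image-level CNN classifier in Python using TensorFlow/Keras. The input resolution, layer sizes, and training settings here are illustrative assumptions, not the architecture or hyperparameters reported in the study.

```python
# Minimal sketch of a CNN echocardiogram view classifier (Keras/TensorFlow).
# Layer sizes, input resolution, and optimizer are assumed for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_VIEWS = 15              # standard echocardiographic views in the study
INPUT_SHAPE = (80, 80, 1)   # assumed downsampled grayscale frames

def build_view_classifier():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_VIEWS, activation="softmax"),  # one probability per view
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

if __name__ == "__main__":
    model = build_view_classifier()
    model.summary()
    # Training on a labeled image set would look like:
    # model.fit(train_images, train_view_labels,
    #           validation_data=(val_images, val_labels), epochs=10)
```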
The CNN model classified videos with 97.8 percent accuracy and was 100 percent accurate on seven of the 12 video views. For continuous-wave Doppler (CW), pulsed-wave Doppler (PW) and M-mode views, which appear as still images in echocardiograms, the platform achieved accuracies of 98, 83 and 99 percent, respectively.
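Because the classifier operates on individual images, video recordings need their frame-level predictions combined into a single view label. One common way to do this, shown below as a generic sketch rather than the study's exact procedure, is to average the per-frame probabilities and take the most likely class.

```python
# Aggregate per-frame predictions into one video-level view label by
# averaging frame probabilities. Generic approach, assumed for illustration.
import numpy as np

def classify_video(model, frames: np.ndarray) -> int:
    """frames: array of shape (num_frames, H, W, 1), preprocessed like training images."""
    frame_probs = model.predict(frames, verbose=0)  # (num_frames, NUM_VIEWS)
    mean_probs = frame_probs.mean(axis=0)           # average over all frames
    return int(np.argmax(mean_probs))               # index of the predicted view
```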
“View classification is the essential first step in interpreting echocardiograms,” wrote corresponding author Rima Arnaout with the Cardiovascular Research Institute at the University of California, San Francisco, and colleagues. “We report here a single, vendor-agnostic deep-learning model that correctly classifies all types of echocardiogram recording … from all acquisition points relevant to a full standard transthoracic echocardiogram (parasternal, apical, subcostal and suprasternal), at accuracies that exceed those of board-certified echocardiographers given the same task.”
In a separate interview with IEEE Spectrum, Arnaout noted the study addressed a limited task, the first step cardiologists take when evaluating echocardiograms, and that the model is not likely to replace a human performing the same job.
“The best technique is still inside the head of the trained echocardiographer,” she said in the story.
“As cardiologists, we read the images and then go see the patient,” Arnaout told IEEE Spectrum. “[W]e’re both reading images and practicing medicine. I don’t think that second piece will be taken over so quickly.”