Eye doctors using AI beat unassisted docs, AI alone at diagnosing diabetic vision loss

Google’s AI research group has shown that deep-learning assistance can improve ophthalmologists’ diagnosis of diabetic retinopathy on retinal fundus photographs, according to a study slated for publication in Ophthalmology. In the study, physicians using the algorithm were more accurate than both the algorithm alone and unassisted physicians.

Lead author Rory Sayres, PhD, of Google Research in Mountain View, Calif., and colleagues had 10 ophthalmologists interpret 1,796 fundus images from 1,612 diabetic patients.

The researchers gave the same assignment to an algorithm they’d earlier developed for detecting diabetic retinopathy (DR), which is the most common cause of vision loss among people with diabetes, according to the NIH’s National Eye Institute.

The team had both groups of “readers,” human and machine, grade each image on a widely used 5-point DR severity scale, ranging from no apparent disease to proliferative diabetic retinopathy, the most advanced stage.

The researchers further compared interpretations across three reading conditions: unassisted, model grades only, and model grades plus a heatmap showing which regions of the image contributed to the algorithm’s assessment.

Computing and comparing sensitivity and specificity, Sayres and colleagues found the ophthalmologists (five generalists, four retina specialists and one retina fellow) graded more accurately with model assistance than without it in the grades-only condition.
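
To make the accuracy comparison concrete, here is a minimal sketch, not the study’s actual analysis code, of how sensitivity and specificity can be computed from 5-point DR grades once they are binarized at a moderate-or-worse threshold. The grade encoding, threshold value and example data are assumptions for illustration only.

```python
# Illustrative sketch: sensitivity and specificity for detecting
# moderate-or-worse DR from 5-point grades (0 = no apparent disease,
# 4 = proliferative DR). Encoding and threshold are assumptions,
# not taken from the study's methods.

def sensitivity_specificity(reader_grades, reference_grades, threshold=2):
    """Binarize grades at `threshold` and compare reader calls to the reference."""
    tp = fp = tn = fn = 0
    for reader, reference in zip(reader_grades, reference_grades):
        predicted_positive = reader >= threshold
        actual_positive = reference >= threshold
        if predicted_positive and actual_positive:
            tp += 1
        elif predicted_positive and not actual_positive:
            fp += 1
        elif not predicted_positive and actual_positive:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical grades for a handful of images
reader = [0, 2, 3, 1, 4, 0]
reference = [0, 2, 2, 2, 4, 1]
print(sensitivity_specificity(reader, reference))  # -> (0.75, 1.0)
```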

Additionally, algorithmic assistance lifted retina specialists’ accuracy above that of both unassisted readers and the algorithm alone.

The deep-learning aid also increased the ophthalmologists’ confidence in their grading, though at the cost of longer grading times.

The authors noted the latter effect might be mitigated as physicians accumulate experience using the algorithm.

“Deep learning has shown great promise in training algorithms that are highly accurate in terms of disease detection,” Sayres et al. concluded. “The results of this study are promising, suggesting that with increased transparency, model assistance can boost reader performance beyond what is achievable by the model or reader alone.”

The study is available in full for free.

Dave Pearson

