Interpretive AI for medical imaging: 5 points of skepticism, idealism

Surveying the landscape of interpretive AI in radiology, two researchers note a yawning gap between great expectations set in the recent past and actual clinical implementations as of spring 2023.

Nevertheless, the duo remains hopeful, albeit guardedly so.

“As healthcare professionals increasingly use radiologic AI and as large language models continue to evolve, the future of AI in medical imaging appears bright,” they state. “However, it remains uncertain whether the traditional practice of radiology, in its current form, will share this promising outlook.”

The authors are biomedical informaticist Pranav Rajpurkar, PhD, of Harvard and radiologist Matthew Lungren, MD, MPH, of Microsoft and UC-San Francisco. The New England Journal of Medicine published their article May 25 [1].

As examples of factors separating anticipated inroads from actual impacts, Rajpurkar and Lungren name the limited generalizability of AI models, the absence of data from prospective real-world studies and the scarcity of comprehensive AI solutions for image interpretation.

Among their predictions of advances likely to clear looming hurdles and transform the specialty:

1. Widespread adoption of the technology requires broad swaths of the radiologist workforce to accept its downsides along with its upsides, even for FDA-approved offerings. And yet:

“We expect that the eventual resolution of these issues and more comprehensive solutions, including the development of new foundation models, will lead to broader adoption of AI within the healthcare sector.”

2. Trust in AI’s recommendations doesn’t automatically grow when explainable AI displaces “black box” models in validation exercises. What’s needed, the authors suggest, is adequate evidence showing the technology can meaningfully assist radiologists in real-world clinical workflows. Eventually,

“this approach will enable us to better understand the effectiveness and limitations of AI in clinical practice and establish safeguards for effective clinician–AI collaboration.”

3. Generalist AI models that can contribute to every aspect of image interpretation, including contextualizing medical histories and generating radiology reports, still seem a long way off. However:

“Early studies of such models have shown that they can detect several diseases on images at an expert level without requiring further annotation, a capability known as zero-shot learning.”
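
To make the term concrete: in zero-shot classification, a vision-language model scores an image against candidate findings written as plain-text prompts, so no labeled training examples are needed for a new finding. Below is a rough sketch using the open-source CLIP model through the Hugging Face transformers library as a stand-in; a clinical system would use a model trained on medical image-report pairs, and the file name and prompt wordings here are illustrative assumptions, not drawn from the paper.

    # Rough sketch of zero-shot classification with a CLIP-style
    # vision-language model. The general-domain checkpoint below is a
    # stand-in; the image file and prompts are illustrative assumptions.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("chest_xray.png")  # hypothetical input image

    # Candidate findings are expressed as free-text prompts; no labeled
    # examples or fine-tuning are required to add a new finding.
    prompts = [
        "a chest X-ray showing pneumonia",
        "a chest X-ray showing a pleural effusion",
        "a normal chest X-ray",
    ]

    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds image-text similarity scores; softmax turns
    # them into a probability distribution over the candidate prompts.
    probs = outputs.logits_per_image.softmax(dim=-1)[0]
    for prompt, p in zip(prompts, probs):
        print(f"{p:.3f}  {prompt}")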

4. Recent studies of large language models like ChatGPT impress casual observers while reminding clinical users how young the models are and how far they have to go. But potential isn’t nothing:

“We anticipate that future [large language] AI models will be able to process imaging data, speech, and medical text and generate outputs such as free-text explanations, spoken recommendations, and image annotations that reflect advanced medical reasoning.”

5. The extent to which large language models can worsen existing problems “remains unknown and is an important area for study and concern,” Rajpurkar and Lungren point out. All the same,

“the potential for generalist medical AI models to provide comprehensive solutions to the task of interpretation of radiologic images and beyond is likely to transform not only the field of radiology but also healthcare more broadly.”

The paper is posted on the New England Journal of Medicine website (behind a paywall).

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
