Upcoming radiology podcast challenges imaging experts to step up and question the negative AI hype

Experts are beginning to discuss the importance of creating not just accurate AI, but ethical algorithms.

John D. Banja, PhD, professor of medical ethics at Emory University in Atlanta, recently received a grant from the Advanced Radiology Services Foundation in Grand Rapids, Michigan, for a two-year project. Alongside faculty from Michigan State University's Department of Radiology, he'll explore this subject through research and an upcoming series of podcasts with radiology experts.

Banja, principal investigator of the grant and editor of the American Journal of Bioethics Neuroscience, spoke with HealthImaging about the project, AI in radiology and the importance of creating ethical technology.

HealthImaging: Where did your interest in radiology and the idea for this podcast come from?

John D. Banja: (This) stems from about 20 years of research I've been doing on medical error and its adverse outcomes—you bump into AI pretty quickly. Because radiology is front and center in the AI revolution, I inevitably talked to some of the radiology folks at Emory, and a lot of them are interested in the ethical and social issues in radiology.

What do you hope to do with the podcast and research in terms of ethics in AI?

Banja: Here’s the important part: We already know what a lot of the ethical issues are going to be…informed consent, privacy, data protection, ownership, all that kind of stuff. What we need to do is drill down to the next level, especially the practice level. That’s what we would like to do with this grant; we’d like to do scholarship and podcasts and try to envision what the immediate or near-term future is going to be like, and how it might impact the practice of radiology.

Your first podcast will discuss the hype that AI is going to replace radiologists, which you’ve said doesn’t appear likely. How can radiology move past those fears?

Banja: I hope this project will move that needle forward. Radiologists are feeling more comfortable about this (topic) this year compared to last, but they have to step up to the microphone and call out the hype-ologists; they have to get the word out in a very informed way. Radiologists have to reject the stereotype that all they do is sit in a hermetic cell and read images all day long—there’s no radiologist who does that. AI only takes over a sliver of their everyday job functions. The ethical problem here is that if we’re going to lose medical students because of the negative hype, then the American population, and the world population, are going to be underserved. They (radiologists) are kind of the diagnostic heart of the hospital. You can’t afford to lose that.

Accurate data is an important part of training AI models. What ethical data challenges will radiology algorithms face?

Banja: If you think about the training dataset for an AI model, maybe 100,000 images, maybe 1 million, you know some of those images are labeled incorrectly. I was in Phoenix listening to a doctor—he’s not a radiologist—but he made a great point when he said the data we’re using now to educate our algorithms will be obsolete in three to five years. In other words, we’re going to constantly have to upgrade the data we use to educate these algorithms because the technology that generates it is going to change. The data itself, the images, are going to change, and we’re going to have to keep up with that. I think it’s going to be a constant learning process.

Who will be responsible when an algorithm makes an incorrect diagnosis?

Banja: The law professors are writing their articles right now. Is it the hardware manufacturer? Is it the programmer? Is it the coder? Is it the institution? Is it the doctor who became so reliant on the technology that he never checked the recommendation it gave? And by the way, that is going to happen. If these technologies start appearing at our hospitals over the next couple of years and they turn out to be really reliable and robust, physicians are going to trust them implicitly. They’re not even going to check what the AI is recommending, largely because hospitals are such a chaotic environment to begin with that anything that saves time will be used.

How can radiology make sure AI incorporates ethics?

Banja: We have genetic ethicists, we have neuroethicists, we have business ethicists, we have journal ethicists, and I’m saying this field is so vast and so complex that you could easily imagine people with ethics degrees and backgrounds devoting their careers to studying AI. It’s that rich and complex in ethical problems.

What can happen if radiology doesn’t incorporate ethics into AI?

Banja: What will happen is what has happened so much in the past. Some of the damage these systems could do could be really huge. Let’s say you have a malfunctioning AI in pathology and it’s persistently misreading slides; within a week that could affect thousands of patients. We have to be very ethically astute as we build these technologies, and we have to monitor and check them constantly. That’s going to be a real challenge.

""

