Q&A: J. Raymond Geis on ethical AI in radiology

A number of leading imaging societies have published a multisociety statement on ethical AI in radiology, designed to start a dialogue among radiologists and create an ethical framework for the rapidly growing technology. One of its chief contributors, J. Raymond Geis, MD, ACR Data Science Institute senior scientist, spoke with HealthImaging about ethical AI and why it’s important to radiology.

HI: What do you hope radiologists take away from this document?

Geis: We hope this paper starts a rich discussion of the ethics of AI in radiology, describes a unique moral issue that all of us should consider, and points the direction for future regulation and standards of ethical radiology AI. This international multisociety statement is one step toward helping the radiology community build an ethical framework to steer technological development, influence how stakeholders respond to and use AI, and implement these tools to do right for patients.

HI: Data is a large part of ethical AI. How can radiologists incorporate ‘data truthfulness’ into AI?

Geis: When it comes to AI of any sort, data are the main focus and take almost all the time and effort of any project. Ultimately, you want your data to be clean, meaning it is all there and has no missing or otherwise confusing parts. It should be consistent and unambiguous, and you should be certain it was prepared correctly; that work is often 90% of an AI project. All data are biased to some degree, no matter what. The trick is figuring out how, and what effect the bias has on the decisions made. This is a major issue with ethical AI.
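To make the point about clean, consistent data concrete, here is a minimal sketch of the kind of basic audit a team might run on an imaging dataset before training. It is an illustration only, not part of the multisociety statement; the column names, label vocabulary, and manifest file are hypothetical.

```python
# Minimal "data truthfulness" audit for a hypothetical exam manifest.
# Column names (accession_number, finding_label, scanner_site) are illustrative.
import pandas as pd

def audit_dataset(df: pd.DataFrame) -> dict:
    """Report missing values, duplicate exams, and out-of-vocabulary labels."""
    return {
        # Fraction of missing values per column (completeness).
        "missing_fraction": df.isna().mean().to_dict(),
        # Duplicate accession numbers suggest the same exam was counted twice.
        "duplicate_exams": int(df.duplicated(subset="accession_number").sum()),
        # Labels outside the expected vocabulary are ambiguous or mis-coded.
        "unexpected_labels": sorted(set(df["finding_label"]) - {"normal", "abnormal"}),
    }

def subgroup_prevalence(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Compare label prevalence across subgroups to surface possible sampling bias."""
    return df.groupby(group_col)["finding_label"].apply(lambda s: (s == "abnormal").mean())

# Example usage with a hypothetical manifest file:
# df = pd.read_csv("exam_manifest.csv")
# print(audit_dataset(df))
# print(subgroup_prevalence(df, "scanner_site"))
```

Checks like these do not remove bias, but they make it visible, which is the first step Geis describes: figuring out how the data are biased and what effect that has.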

HI: The ‘black box’ nature of AI is brought up a lot. Will AI ever be fully accepted by radiologists and patients if it’s not fully transparent?

Geis: It’s unlikely that it will ever be fully transparent. But a human making decisions isn’t fully transparent either.

HI: Maybe that was the wrong word. How about explainable?

Geis: Explainable is probably a better word. There is a whole field called explainable AI, which tries to learn at least enough about what the algorithm is doing so that you can tell if it makes a mistake on a patient. Maybe it missed a cancer. You’d like to go back and see why it did that. I think those sorts of things will be getting better all the time.

This is a very important era we’re in right now. We’re building these things (AI) and we’re trying them out on a small scale to learn what sort of explainability we need. What’s going to work from a technical standpoint? And from an ethical standpoint, are we learning enough so we can tell patients they can trust the AI? We don’t have those answers right now, but that’s part of the reason that so many people were interested in writing this paper.
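One common technique from the explainable AI field Geis mentions is occlusion sensitivity: covering parts of an image and watching how the model’s output changes, to see which regions drove the prediction. The sketch below is a generic illustration under assumed inputs (a grayscale image array and a `model_predict` function returning a single score); it is not a method from the statement.

```python
# Minimal occlusion-sensitivity sketch for a hypothetical image classifier.
import numpy as np

def occlusion_map(model_predict, image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Slide a patch over the image and record how much the score drops."""
    baseline = model_predict(image)          # score on the unmodified image
    heatmap = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # blank out one region
            # A large drop in score means this region mattered to the prediction.
            heatmap[y:y + patch, x:x + patch] = baseline - model_predict(occluded)
    return heatmap
```

Maps like this are the kind of tool that could help a radiologist look back at a missed cancer and ask what the algorithm was attending to, which is the use case Geis describes.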

HI: This statement was described as ‘aspirational.’ Why wasn’t it more prescriptive?

Geis: Some of these things, in terms of being more prescriptive, will rely on regulations; and various countries or political bodies will have different approaches. We felt it was important to try and get ahead of the conversation now. We absolutely feel that that will be our next step.

The laws themselves, or specific codes of conduct that the ACR or other societies in the U.S. may develop, will probably be focused on the communities they specifically serve. We in the ACR talk regularly with the FDA, as well as MITA and other vendor organizations, about what’s both desirable and practical in terms of setting up rules, regulations and standards to make sure we’re doing the right thing.

HI: What are the next steps for creating ethical AI?

Geis: The next steps will be to become more prescriptive. Develop codes of conduct. As an aside, it feels like we’re all extremely focused on doing the right things, something I haven’t always sensed in discussions on standards for other technology. We’ve seen first-hand the good, bad and ugly in other tech fields. There’s so much opportunity here for doing well by doing right, and hopefully we can put up enough barriers to minimize unethical behavior.

""

