AI needs ethical standards, but are radiologists ready?
Artificial intelligence (AI) is becoming an essential part of radiology, meaning the specialty must examine the ethics of computers and AI, wrote authors Marc Kohli, MD, director of clinical informatics at the University of California, San Francisco, and Raym Geis, MD, an associate professor of radiology at National Jewish Health in Fort Collins, Colorado, in an article published July 15 in the Journal of the American College of Radiology.
"The rapid evolution of autonomous and intelligent systems in radiology calls for us to update radiology’s ethics and code of behavior for [AI] in radiology," Kohli and Geis wrote. "This code must be continually reassessed as these systems become more complex and autonomous."
Ethical issues for AI systems in radiology belong to the following three categories, according to Kohli and Geis:
- Data (including generation, recording, curation, processing, dissemination, sharing and use).
- Algorithms (including AI, artificial agents, machine learning and deep learning).
- Practices (including responsible innovation, programming, hacking and professional codes to formulate and support morally good solutions).
Data
Informed consent, privacy and data protection, data ownership, objectivity and the gap in resources to analyze large data sets are the five key areas of data ethics, according to the authors. Because new uses for patient data continue to be developed, release of information, data use agreements and institutional review board requirements should also meet appropriate privacy standards.
"We need to consider the shifting balance between maintaining personal information privacy and advancing the frontier of intelligent machines," Kohli and Geis wrote, suggesting that a solution could be for patients to sign data use agreements with third parties contributing to their digital health record in order to document data quality, security and use.
The authors also question how data ownership would hold up under such policies, asking, "Do these policies hold, though, if the data are used to build a highly profitable artificial intelligence product? Who owns the intellectual property generated from the analysis of aggregated data sets?"
Kohli and Geis explained that academic and commercial data practices must develop policies that balance personal privacy against the greater good, in this case, "advancing the frontier of intelligent machines." Additionally, data used to train or validate AI models should be subject to "version control," or change tracking.
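To make that "version control" idea concrete, here is a minimal sketch in Python of one way change tracking could work; the directory name, manifest format and function names are illustrative assumptions, not anything Kohli and Geis describe. The idea is simply to record a content hash for every file in a training set so that any later addition, removal or modification of the data is detectable.

```python
"""Minimal sketch of dataset change tracking via content hashing.

An illustrative assumption, not a method from the JACR article:
hash every file in a training set, save a manifest, and compare
against that manifest before any later training or validation run.
"""
import hashlib
import json
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MB chunks so large imaging files fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(data_dir: str) -> dict:
    """Map each file's relative path to its content hash."""
    root = Path(data_dir)
    return {
        str(p.relative_to(root)): hash_file(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def verify(data_dir: str, manifest_path: str) -> list:
    """List files added, removed or modified since the manifest was saved."""
    old = json.loads(Path(manifest_path).read_text())
    new = build_manifest(data_dir)
    modified = [("modified", k) for k in old if k in new and old[k] != new[k]]
    added = [("added", k) for k in new if k not in old]
    removed = [("removed", k) for k in old if k not in new]
    return modified + added + removed


if __name__ == "__main__":
    # "training_data" is a placeholder directory for this sketch.
    manifest = build_manifest("training_data")
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    print(verify("training_data", "manifest.json"))  # [] if nothing changed
```

In practice a team might pair a manifest like this with each trained model, so that any audit can confirm exactly which version of the data produced a given result.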
Algorithms
Requirements for the ethical design and auditing of algorithms include safety, transparency and value alignment. In terms of the safety of AI systems, the radiologist's role remains an open question, according to the authors.
"Autonomous and intelligent systems should be safe and secure throughout their operational lifetimes, and verifiably so when applicable and feasible," Kohli and Geis wrote. "Commonly used deep-learning approaches may incorporate trace elements of training data and then disclose those elements, either inadvertently or even intentionally."
Additionally, radiologists and other physicians must be able to develop a transparent understanding of how an AI system could cause harm and be able to optimize these systems for the best patient outcomes.
Practices
Patient consent, user privacy, secondary data use, quality control processes and the confidence of those who rely on radiologists and the American College of Radiology should all be documented to "promote technical progress and protect the rights of individuals and groups," the authors explained. At the very center of these practices must be trust in the radiologists themselves.
"In this rapidly changing field, trust involves both the belief that radiologists will not only do the right thing now but also advance our policies and conduct to account for changes in machines," Kohli and Geis wrote.