Staged cyberattack altering mammography images deceives both AI and radiologists
Researchers are questioning the safety of artificial intelligence models after a staged cyberattack altered diagnosis-relevant details on mammograms.
The study, published in Nature Communications this week, highlights a vulnerability of AI tools: “adversarial attacks” can alter image features in ways that lead both AI and human readers to an incorrect diagnosis.
“Under adversarial attacks, if a medical AI software makes a false diagnosis or prediction, it will lead to harmful consequences to patients, healthcare providers, and health insurances,” corresponding author Shandong Wu, PhD, with the Department of Radiology at the University of Pittsburgh, and co-authors cautioned.
One way to mount such an attack is with a generative adversarial network (GAN), a model that manipulates images in ways that can change how humans or AI interpret them. This can include inserting or removing cancerous-looking regions, causing a positive finding to appear negative and vice versa.
The University of Pittsburgh researchers first used mammograms to train an algorithm for interpreting the scans. They then developed their own GAN to alter the images, generating fakes that falsely mimicked both positive and negative findings.
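To make the idea concrete, here is a minimal, hypothetical sketch of this kind of attack, not the authors' method: a small attacker network is trained to nudge an image just enough that a frozen diagnostic classifier flips its prediction. The study itself used a full GAN, with a discriminator, to insert or remove lesion-like regions; the layer sizes, perturbation bound, and toy data below are illustrative assumptions only.

```python
# Hypothetical sketch of a learned image-modification attack (not the study's code).
import torch
import torch.nn as nn

class Attacker(nn.Module):
    """Maps an image to a bounded additive perturbation."""
    def __init__(self, eps=0.05):  # eps is an assumed visual-plausibility bound
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        # Keep the fake image close to the original and in valid pixel range.
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)

class Classifier(nn.Module):
    """Stand-in diagnostic model producing a positive/negative logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )

    def forward(self, x):
        return self.net(x)

# Toy stand-ins for mammograms: "negative" images the attack tries to flip to "positive".
images = torch.rand(4, 1, 64, 64)
wrong_labels = torch.ones(4, 1)  # the incorrect diagnosis the attacker wants

classifier = Classifier()  # pretend this is an already-trained diagnostic model
for p in classifier.parameters():
    p.requires_grad_(False)  # the attack does not modify the victim model

attacker = Attacker()
opt = torch.optim.Adam(attacker.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    fakes = attacker(images)
    # Train the attacker so the frozen classifier assigns the wrong label.
    loss = loss_fn(classifier(fakes), wrong_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```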
After the GAN manipulated the images, the team tested both the AI model and five breast imaging radiologists, asking each to distinguish real images from fake ones. The AI model misinterpreted 69.1% of the images. The human readers generally performed better, though their accuracy ranged from 29% to 71%.
The authors caution that the results of their study “pose an imperative need for continuing research on the medical AI model’s safety issues and for developing potential defensive solutions against adversarial attacks.”
You can view the research in Nature Communications.