AI can alter mammograms to fool radiologists, raising fears of new cyberattacks

A new study found that neural networks can learn to alter medical images so convincingly that they are indistinguishable from the originals. Researchers warned this may tempt criminals to use such techniques in cyberattacks.

A team of experts trained a generative adversarial network (GAN) on mammography images to inject or remove suspicious features. At lower resolutions, radiologists could not distinguish unmodified images from those that had been manipulated. At higher resolutions they could detect the modifications, but they also found “significantly” fewer cancers.

Anton S. Becker, of University Hospital Zurich’s Institute of Diagnostic and Interventional Radiology in Switzerland, maintained the method is highly limited but warned that, with increased computing power, GANs could be used as a “cyber-weapon in the near future.”

“All modalities in a modern medical imaging department rely heavily on computers and networks, making them a prime target for cyber-attacks,” the authors wrote. “As machine learning or artificial intelligence (AI) algorithms will increasingly be used in the clinical routine, whether to reduce the radiation burden by reconstructing images from low-dose raw data, optimal patient positioning, or help diagnose diseases, their widespread implementation would also render them attractive targets for attacks.”

GANs are a class of deep learning algorithms that pit two neural networks against one another: one generates or manipulates sample images, while the second tries to distinguish real samples from manipulated ones. The goal of this study was to train two GANs to insert or remove suspicious features and see whether radiologists could detect these attacks.
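
To make that adversarial setup concrete, here is a minimal, generic training loop in PyTorch. It is an illustrative sketch only (the architecture and names are hypothetical, not drawn from the study), but it shows the two-network tug-of-war the researchers exploited:

```python
# A minimal, generic GAN training loop (an illustrative sketch, not the study's code).
# The generator learns to produce images the discriminator cannot tell from real ones.
import torch
import torch.nn as nn

IMG_PIXELS = 256 * 256   # flattened grayscale image at the study's lower resolution
LATENT_DIM = 100         # size of the random noise vector fed to the generator

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Tanh(),          # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),                # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round; real_images is (batch, IMG_PIXELS), scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to separate real images from generated fakes.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator label fakes as "real."
    g_loss = bce(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```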

“Most advanced ML algorithms are fundamentally opaque and as they, inevitably, find their way onto medical imaging devices and clinical workstations, we need to be aware that they may also be used to manipulate raw data and enable new ways of cyber-attacks, possibly harming patients and disrupting clinical imaging service,” the researchers added.

Becker and colleagues trained two CycleGANs on 680 images with and without lesions, selected from two public datasets. An internal dataset of 302 cancers and 590 controls was used for testing. Three radiologists read the modified and original images at two resolutions, low (256 × 256 pixels) and high (512 × 408 pixels), rating the presence of suspicious lesions on a 1-to-5 scale along with the likelihood that an image had been manipulated.
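
The CycleGAN variant extends the basic GAN idea to unpaired image-to-image translation: two generators map between domains (here, roughly, “lesion” and “lesion-free” mammograms), and a cycle-consistency penalty forces a round trip through both generators to reproduce the original image. A hypothetical sketch of that loss term, assuming generators G (domain A to B) and F (domain B to A):

```python
# Hypothetical sketch of CycleGAN's cycle-consistency loss (not the authors' code).
# G translates domain A -> B (e.g., lesion-free -> lesion); F translates B -> A.
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           real_a: torch.Tensor, real_b: torch.Tensor,
                           weight: float = 10.0) -> torch.Tensor:
    # Round trips A -> B -> A and B -> A -> B should reproduce their inputs,
    # which pushes the generators to change only domain-specific features.
    forward_cycle = l1(F(G(real_a)), real_a)
    backward_cycle = l1(G(F(real_b)), real_b)
    return weight * (forward_cycle + backward_cycle)
```

In the full CycleGAN objective this term is added to the adversarial losses; a weight of 10 is the default commonly used in the original CycleGAN paper.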

Their overall performance wasn’t impacted much at the lower resolution (AUC of 0.70 vs. 0.76), but one reader detected fewer cancers (0.85 vs. 0.63). At the higher resolution, each radiologist showed “significantly” lower cancer detection rates, scoring an AUC of 0.37 compared to 0.80. All three were able to detect the modified images, however, largely because of visible artifacts.

“Our results indicate that while GANs can learn the appearance of suspicious lesions, the modification of images is currently limited by the introduction of artifacts, and the size of the images is limited by technical memory constraints,” the authors concluded. “Nevertheless, this matter deserves further study in order to shield future devices and software from AI-mediated attacks.”

Read the entire study in the European Journal of Radiology.

""
