RSNA 2018: How AI can help, but also hack into medical imaging
By harnessing artificial intelligence (AI) and the internet, radiology and medical imaging have greatly improved patient care and streamlined the transfer of medical records and images.
However, researchers from Switzerland have demonstrated that this increased connectivity can also open the door to outside tampering with medical images, ultimately putting patients at high risk.
The research, to be presented Nov. 28 at the annual meeting of the RSNA in Chicago, demonstrates how AI is not only useful in clinical settings but could, within the next five to 10 years, be weaponized to hack medical images if hospitals and hardware and software vendors don’t take appropriate precautions.
Led by Anton S. Becker, MD, radiology resident at University Hospital Zurich and ETH Zurich in Switzerland, the researchers looked at the potential to tamper with mammograms using an AI algorithm.
"As doctors, it is our moral duty to first protect our patients from harm," Becker said in a prepared statement. "For example, as radiologists we are used to protecting patients from unnecessary radiation. When neural networks or other algorithms inevitably find their way into our clinical routine, we will need to learn how to protect our patients from any unwanted side effects of those as well."
For their study, the researchers trained an AI application known as a cycle-consistent generative adversarial network (CycleGAN) on 680 screening mammograms from 334 patients. They wanted to determine whether CycleGAN could insert cancer-specific features into mammograms, or remove them, in a realistic manner.
The neural network then converted images containing cancer into healthy-looking ones and, conversely, converted healthy control images into cancerous-looking ones.
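The setup the team describes follows the standard CycleGAN idea: two generators translate images between the "healthy" and "cancerous" domains, two discriminators judge how realistic the results look in each domain, and a cycle-consistency loss forces a round-trip translation to reproduce the original image. The sketch below illustrates that loss structure in PyTorch; the tiny networks, loss weights and training details are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch of the CycleGAN structure described above (not the authors' code).
# Network sizes, loss weights, and data handling are assumptions for illustration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy image-to-image generator (a real CycleGAN uses a much deeper ResNet/U-Net)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy critic that scores how realistic an image looks in its domain."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# G: healthy -> cancerous-looking, F: cancerous -> healthy-looking
G, F = Generator(), Generator()
D_cancer, D_healthy = Discriminator(), Discriminator()
adv_loss, cycle_loss = nn.MSELoss(), nn.L1Loss()
lambda_cycle = 10.0  # assumed weight on the cycle-consistency term

def generator_loss(healthy, cancerous):
    """Compute the generator objective on a batch of healthy and cancerous image patches."""
    fake_cancer = G(healthy)      # insert cancer-like features
    fake_healthy = F(cancerous)   # remove cancer-like features

    # Adversarial terms: translated images should fool the domain discriminators.
    loss_adv = (
        adv_loss(D_cancer(fake_cancer), torch.ones_like(D_cancer(fake_cancer)))
        + adv_loss(D_healthy(fake_healthy), torch.ones_like(D_healthy(fake_healthy)))
    )
    # Cycle consistency: translating there and back should reconstruct the input.
    loss_cycle = (
        cycle_loss(F(fake_cancer), healthy) + cycle_loss(G(fake_healthy), cancerous)
    )
    return loss_adv + lambda_cycle * loss_cycle
```

Because the two datasets need not be paired, the cycle-consistency term is what lets the network learn to add or remove lesion-like features while leaving the rest of the image plausible, which is why the tampered mammograms can look convincing.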
Three board-certified radiologists reviewed the images and indicated whether they thought each image had been modified or was unaltered. None of the radiologists could accurately distinguish between the two, according to the researchers.
"Neural networks, such as CycleGAN, are not only able to learn what breast cancer looks like, we have now shown that they can insert these learned characteristics into mammograms of healthy patients or remove cancerous lesions from the image and replace them with normal looking tissue,” Becker said.
Becker said in an interview with Health Imaging that this type of cyberattack should not concern patients now, but he anticipates it will be feasible in clinical settings within the next five to 10 years.
Overall, Becker said he hopes his team’s findings will raise awareness among physicians and vendors so they can take precautions against such attacks, including strengthening IT departments or investing in more secure patient portal software.