Imaging AI adds value to patient data but also puts it at heightened risk
Healthcare AI continues its march into clinical departments and imaging workflows. In the process, it is multiplying the points of entry seductive to cybercriminals.
And the distributed vulnerabilities broaden the capacity of specific threats, including ransomware, malware and phishing scams, to disrupt baseline operations anywhere within the healthcare enterprise.
Cleveland Clinic researchers break down the scenario and offer suggestions for countermeasures in a paper published in JACR [1].
“As the adoption of imaging AI solutions grows, AI outputs may be critical to patient care,” they point out, “and cybersecurity threats causing downtime or delays can have significant patient-care implications.”
Neuroradiologist and imaging informaticist Chintan Shah, MD, and colleagues categorize the heightened risks by the three key aspects of data security they threaten: confidentiality, integrity and availability, the classic “CIA triad” of information security. Here are excerpts of their analyses of each risk type.
1. DATA CONFIDENTIALITY
Development of strong, generalizable AI models requires enlisting multiple sites for data curation to increase the available data for building, training and validating a model.
Shah and co-authors note that a multiparty effort can work with either of two approaches. One method uses a central server to receive data from each site. The other leverages a distributed model to let each group train algorithms locally, sharing only model parameters for later integration. More:
“The central server approach requires data use/sharing agreements to ensure data is protected. Federated learning and split learning are variants of the distributed learning approach. Even though raw data is never directly shared between sites, it is possible for sensitive information to be recovered, particularly if countermeasures are not implemented.”
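How a distributed run avoids centralizing raw data is easier to see in code. The sketch below is a minimal federated-averaging loop in Python with NumPy; the toy linear model, site data and function names are illustrative assumptions rather than anything from the paper, but the flow is the one the authors describe: each site trains locally, and only the resulting parameters travel to the coordinator.

```python
import numpy as np

# Toy illustration of federated averaging: each site trains a simple linear
# model on its own data, and only parameter vectors (never the raw patient
# data) are sent to the coordinator for aggregation. All names and data here
# are hypothetical, for illustration only.

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One site's local training: a few epochs of gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only this parameter vector leaves the site

# Three hypothetical sites, each with private (X, y) data that stays local.
sites = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(4)
for _ in range(10):
    # Each site trains locally starting from the current global model ...
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # ... and the coordinator averages the returned parameters.
    global_w = np.mean(local_ws, axis=0)

print("aggregated model parameters:", global_w)
```

Even with raw data kept local, the shared parameter vectors can leak information about the underlying training examples, which is exactly the recovery risk the authors flag and the reason they stress countermeasures.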
2. DATA INTEGRITY
Deploying multiple data-generating AI products in the same practice increases the burden of managing and integrating multiple data sources, and differences in data-handling policies across those sources can translate into inconsistencies in data integrity as well.
Meanwhile the DICOM standard itself presents data integrity challenges, the authors warn. They note in particular the so-called “DICOM preamble,” a 128-byte section at the beginning of each DICOM file that can hold arbitrary content and was originally intended to help non-DICOM software access the images and metadata. More:
“Although to date there has been no publicly disclosed attack involving the DICOM preamble, incorporation of AI increases the diversity of image sources, some of which reside beyond a radiology practice’s direct control—and from which maliciously manipulated DICOM data exploiting the preamble could be generated and pushed to an otherwise secure PACS.”
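To make the exposure concrete: a standard DICOM Part 10 file opens with a 128-byte preamble that DICOM readers ignore, followed by the four-byte magic value DICM. Because those first 128 bytes can be anything, a crafted file can be valid DICOM and simultaneously parse as another format, such as a Windows executable. The Python sketch below, using only the standard library, shows one way a practice might screen incoming files; the file path and the short signature list are hypothetical assumptions, not a production-grade scanner.

```python
# A minimal defensive check for suspicious content in the 128-byte DICOM
# preamble. The magic bytes screened for are illustrative; a real scanner
# would use a fuller signature list.

SUSPICIOUS_MAGIC = {
    b"MZ": "Windows executable (PE) header",
    b"II*\x00": "TIFF (little-endian) header",
    b"MM\x00*": "TIFF (big-endian) header",
}

def check_dicom_preamble(path):
    with open(path, "rb") as f:
        preamble = f.read(128)  # arbitrary bytes, ignored by DICOM readers
        magic = f.read(4)       # must be b"DICM" in a standard Part 10 file
    if magic != b"DICM":
        return f"{path}: missing DICM magic; not a standard Part 10 file"
    for sig, desc in SUSPICIOUS_MAGIC.items():
        if preamble.startswith(sig):
            return f"{path}: preamble begins with {desc}; possible polyglot file"
    if any(preamble):           # non-zero preamble is legal but worth logging
        return f"{path}: non-empty preamble; review before trusting"
    return f"{path}: preamble clean"

print(check_dicom_preamble("study/IM0001.dcm"))  # hypothetical file path
```

A zero-filled preamble is the common benign case; anything else is legal under the standard but worth a closer look before the file reaches an otherwise secure PACS.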
3. DATA AVAILABILITY
While cybersecurity threats can strike by limiting data availability, radiology has lately been pushing in the opposite direction: The expansion of image viewing and AI software to mobile applications has substantially increased data availability.
Noting that increased availability often has the effect of “widening the digital, physical and social engineering attack surfaces,” Shah et al. write:
“Data is now stored and processed in more locations, and commonly in the cloud if mobile viewing is a feature. The larger physical attack surface comes with the substantially increased number of mobile devices connected to the many AI solutions, each of which can be physically lost or stolen and may have variable vulnerabilities in a ‘bring your own device’ environment.”
AI offers myriad upsides for hospitals’ clinical and imaging operations—but it also brings with it numerous “incremental” cybersecurity risks, Shah et al. conclude.
Because of the double-edged nature of the bargain, the development and deployment of AI in and for provider organizations “should be done in coordination with qualified IT specialists in order to ensure such risks are adequately considered and mitigated.”