New framework promotes ethical, secure data sharing for developing AI

A group of top Stanford University School of Medicine radiologists has unveiled a new framework promoting the ethical and responsible exchange and use of clinical data for developing artificial intelligence applications.

The framework rests on three pillars: patient privacy, transparency and an obligation not to profit from clinical data, the group explained March 24 in Radiology. In the paper, the researchers detailed how such an approach could open the door to widespread AI creation and improved medical image analysis.

"Now that we have electronic access to clinical data and the data processing tools, we can dramatically accelerate our ability to gain understanding and develop new applications that can benefit patients and populations," David B. Larson, MD, said in a statement. "But unsettled questions regarding the ethical use of the data often preclude the sharing of that information."

To resolve these questions, Larson and colleagues made ethical stewardship the basis of their framework, ensuring that neither patients nor providers have complete control over clinical information.

And under their new structure, providers could not sell data for profit. Developers would only be able to make money from the activities performed by the AI, not the data itself.

Larson and colleagues' framework supports releasing de-identified and aggregated clinical data for research and development, so long as those using the information identify themselves and abide by ethical standards. The authors noted that patient consent wouldn't be needed, and individuals wouldn't "necessarily" have the option to opt out of having their data used, as long as their privacy is safeguarded.

"When used in this manner," the authors wrote, "clinical data are simply a conduit to viewing fundamental aspects of the human condition. It is not the data, but rather the underlying physical properties, phenomena and behaviors that they represent, that are of primary interest."

In cases where an individual’s name was accidentally available, such as on a piece of jewelry visible in a CT scan, the developer or other party using that information would be required to notify the patient and destroy the data.

According to the statement from RSNA, Larson and colleagues will be releasing their framework to the public for potential stakeholders to analyze.

"We hope this framework will contribute to more productive dialogue, both in the field of medicine and computer science, as well as with policymakers, as we work to thoughtfully translate ethical considerations into regulatory and legal requirements," Larson added.

The American College of Radiology recently made its own AI recommendations via comments on a draft memo published by the White House Office of Management and Budget. The college emphasized ethics, patient privacy and economics, in addition to other priorities.


