New framework promotes ethical, secure data sharing for developing AI

A group of top Stanford University School of Medicine radiologists has unveiled a new framework promoting the ethical and responsible exchange and use of clinical data for developing artificial intelligence applications.

The pillars of this newly created framework rest on patient privacy, transparency and an obligation not to profit from clinical data, the group explained March 24 in Radiology. In the paper, the researchers detailed how such an approach could open the door to widespread AI creation and improved medical image analysis.

"Now that we have electronic access to clinical data and the data processing tools, we can dramatically accelerate our ability to gain understanding and develop new applications that can benefit patients and populations," David B. Larson, MD, said in a statement. "But unsettled questions regarding the ethical use of the data often preclude the sharing of that information."

To address these questions, Larson and colleagues made ethical stewardship the basis of their framework, ensuring that neither patients nor providers have complete control over clinical information.

And under their new structure, providers could not sell data for profit. Developers would only be able to make money from the activities performed by the AI, not the data itself.

Larson and colleagues’ framework supports releasing de-identified and aggregated clinical data for research and development, so long as those who are using the information reveal themselves and abide by ethical standards. The authors noted that patient consent wouldn’t be needed, and individuals wouldn’t “necessarily” have the option to opt out of having their data used, as long as their privacy is safeguarded.

"When used in this manner," the authors wrote, "clinical data are simply a conduit to viewing fundamental aspects of the human condition. It is not the data, but rather the underlying physical properties, phenomena and behaviors that they represent, that are of primary interest."

In cases where an individual’s name was accidentally available, such as on a piece of jewelry visible in a CT scan, the developer or other party using that information would be required to notify the patient and destroy the data.

According to the statement from RSNA, Larson and colleagues will be releasing their framework to the public for potential stakeholders to analyze.

"We hope this framework will contribute to more productive dialogue, both in the fields of medicine and computer science and with policymakers, as we work to thoughtfully translate ethical considerations into regulatory and legal requirements," Larson added.

The American College of Radiology recently made its own AI recommendations via comments on a draft memo published by the White House Office of Management and Budget. The college emphasized ethics, patient privacy and economics, in addition to other priorities.


Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
