Misuse of public imaging data is producing 'overly optimistic' results in machine learning research

Misuse of available public data could lead to biased results in research that analyzes the utility of machine learning in medical imaging. 

Such “off label” use happens when public data published for one task are used to train algorithms for a different function. New research published in Proceedings of the National Academy of Sciences analyzed how this happens, as well as the accompanying consequences. 

“This work reveals that such off-label usage could lead to biased, overly optimistic results of machine-learning algorithms,” wrote lead author Efrat Shimron, from the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, and co-authors. “The underlying cause is that public data are processed with hidden processing pipelines that alter the data features.”

Some datasets freely available to the public include pre-processed, rather than raw, images. Consequently, they retain less of the original measurement information, which becomes problematic when researchers use these images to develop reconstruction algorithms.

The researchers used two processing pipelines typical of open-access databases to study their impact on three well-known MRI reconstruction algorithms (compressed sensing, dictionary learning, and deep learning) when applied to both raw and processed images.

The experts explained that when the algorithms used processed data, the images they produced were clearer and sharper and, in some cases, up to 48% better than images reconstructed from raw data. This can create biased results when algorithms are unknowingly trained on processed data.

“Our main observation is that bias stems from the unintentional coupling of hidden data-processing pipelines with later retrospective subsampling experiments,” the authors wrote. “The data processing implicitly improves the inverse problem conditioning, and the retrospective subsampling enables the algorithms to benefit from that.” 
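The mechanism the authors describe can be illustrated with a toy NumPy sketch. This is not the paper's actual pipeline or data; it assumes a hypothetical "hidden" processing step (k-space low-pass filtering, standing in for whatever smoothing a curation pipeline applies) and a crude zero-filled reconstruction after retrospective subsampling. The point it demonstrates is the coupling the authors name: the same subsampling experiment looks easier, i.e., yields lower reconstruction error, when the input was already silently processed.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# "Raw" data: a simple disk phantom plus measurement-like noise.
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
phantom = (x**2 + y**2 < 0.6).astype(float)
raw = phantom + 0.2 * rng.standard_normal((N, N))

def keep_lowfreq(img, keep):
    """Zero out all k-space coefficients outside a central (2*keep)^2 band,
    then return the zero-filled inverse FFT."""
    k = np.fft.fftshift(np.fft.fft2(img))
    mask = np.zeros_like(k)
    c = N // 2
    mask[c - keep:c + keep, c - keep:c + keep] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(k * mask)))

# Hypothetical hidden pipeline step: low-pass filtering during curation.
processed = keep_lowfreq(raw, keep=16)

# Retrospective subsampling experiment: keep an even smaller k-space band
# and use zero-filling as a stand-in for a reconstruction algorithm.
def nrmse(ref, est):
    return np.linalg.norm(ref - est) / np.linalg.norm(ref)

err_raw = nrmse(raw, keep_lowfreq(raw, keep=12))
err_proc = nrmse(processed, keep_lowfreq(processed, keep=12))

# The processed data's high frequencies were already discarded upstream,
# so the identical subsampling step appears to cost much less.
print(f"NRMSE on raw data:       {err_raw:.3f}")
print(f"NRMSE on processed data: {err_proc:.3f}")
```

Because the hidden filtering has already removed most of the energy that the subsampling mask would discard, the processed-data error is substantially lower, which is exactly the "overly optimistic" effect the authors warn about, only here with a trivial reconstruction rather than the algorithms studied in the paper.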

The authors suggested guidelines to help avoid such overinflated AI study results, chief among them the recommendation that data curators provide detailed descriptions of all their processing steps.

“We call for attention of researchers and reviewers: Data usage and pipeline adequacy should be considered carefully, reproducible research should be encouraged, and research transparency should be required,” the experts said. 


Hannah Murphy

In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.

