Experts question validation transparency of FDA-approved AI devices

Many AI devices have already been integrated into clinical practice, but a new analysis questions whether the validation processes behind some of them could introduce algorithmic biases.

The analysis, published in Clinical Radiology, sought to answer this question through a review of FDA-cleared AI devices currently in use in medical practice. As of November 2021, there were 151 AI devices approved by the FDA for medical imaging [1]. While the authors of the new paper maintain these tools will undoubtedly benefit the field of radiology, they called into question the process of external validation involved in the devices' development and whether a lack of transparency in that process could deter clinicians from utilizing AI tools in the future.

“The clinical study design and make-up of the clinical validation dataset can impact the safety and effectiveness of the device and introduce potential biases into clinical care,” corresponding author Harrison X. Bai, MD, of Johns Hopkins and colleagues explained. “It is critical that these devices undergo thorough clinical validation to ensure it is generalizable to a diverse population and image acquisition landscape.” 

The researchers used the American College of Radiology Data Science Institute AI Central database to conduct their analysis. As of November 2021, they found that, of the 151 approved algorithms, 64.2% reported the use of clinical data to validate the device. However, only 4% of these included the study participants' demographics, and just 5.3% reported the specifications of the machines used.

The authors suggested these low figures could lead consumers to question the devices' external validation and, consequently, their own decision to implement AI in clinical practice.

“Although the devices' purported use of clinical data is reassuring, it would be beneficial for all parties if the specific parameters of these clinical studies were publicly available,” the authors suggested, adding that this would help the companies identify any potential biases in their algorithms. 

The authors described patient demographics and specific study parameters in AI validation as crucial and called for greater transparency of validation processes in the future: 

“The lack of transparency of validation data is an important area of concern that both device companies and regulatory agencies should address.”



In addition to her background in journalism, Hannah Murphy also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered radiologic technologist. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.
