Overview of the regulatory landscape of AI in radiology

As artificial intelligence (AI) continues to revolutionize radiology, the regulatory landscape surrounding its implementation is undergoing significant shifts. Among the biggest trends right now are ensuring AI is not biased and better understanding how to run quality assurance testing on algorithms.

Nina Kottler, MD, FSIIM, MS, associate chief medical officer for clinical AI at Radiology Partners and an associate fellow at the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), recently spoke on these changes at the Radiological Society of North America (RSNA) 2024 meeting. She spoke with Health Imaging in a video interview and shed light on the intricacies of U.S. AI regulation and its implications for radiology practices and hospitals.

"We need to make sure that businesses are innovating. So, how do we make sure that we're minimizing regulation so that we don't stifle innovation? But on the other side is how do we make sure that patients are safe? And there are some people who are very scared of AI becoming the Terminator, so how do you balance that? It'll be interesting to see what happens in the Trump administration," explained Kottler.

She said there is a growing patchwork of state laws governing AI, and there were some recent regulations regarding prevention of bias added to Section 1557 of the Affordable Care Act (ACA). The biggest trend among them is ensuring the AI is operating as intended.

A patchwork of individual state laws on AI

Beyond the initial U.S. Food and Drug Administration (FDA) review and approval of clinical algorithms, AI is a different animal from traditional medical devices in that it can change over time due to changes in data inputs. This is referred to as "AI drift," which can introduce bias or amplify bias already in the programming. While federal regulation of AI in healthcare remains a work in progress, Kottler said there has been an emergence of a patchwork of state laws, creating a challenging regulatory environment for AI vendors and healthcare providers.

"Even though there's a lot of concern about regulating AI and people are speaking about it, federal laws haven't come to fruition. So, what we see happening are a lot of laws coming out of the states: 50 states with their individual laws, and many states have multiple laws. There are literally hundreds of different pieces of regulation coming out from every state, and they all have different definitions. So, it's kind of like a crazy mosaic that we all have to manage," Kottler explained.

She said most of the regulation right now focuses on vendors. However, providers, including hospitals and radiology practices, are not exempt.

ACA Section 1557 brings a new era of AI accountability

A pivotal piece of federal regulation affecting providers using AI is Section 1557 of the Affordable Care Act (ACA). While this non-discrimination rule has long applied to healthcare, its scope was expanded in 2024 to include AI. The regulation mandates that AI systems used in healthcare must not discriminate based on factors such as age, gender, race, ethnicity, disability or national origin. Kottler emphasized three key actions providers must take to comply with Section 1557 by the May 1, 2025, deadline:

  • Evaluate for bias: Understand the training data of the AI systems being used and assess whether they contain inherent biases.

  • Mitigate bias: Implement measures to address potential biases, including empowering radiologists to serve as the final decision-makers in clinical applications.

  • Document efforts: Establish and maintain thorough documentation of compliance processes and mitigation strategies.

"It's impossible to totally get rid of bias, but we can mitigate it at the point of care, so you need to make sure that you are evaluating and trying to mitigate against bias," she explained.

She said radiologist end users of AI need to undergo training so they understand why it is important for them to make the final decision.

"There are biases inherent in these algorithms. What we do in our practice is take a few extra steps. We validate an AI model in advance of deploying it, and when we do that validation against our own data, we make a determination: Is it biased, and where does it tend to make mistakes? We take that information and bring it to the radiologist. Also, everything needs to be documented," Kottler explained.
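The pre-deployment validation Kottler describes, checking where a model makes mistakes across patient subgroups on a practice's own data, can be sketched roughly as follows. This is an illustrative sketch only; the field names (`ethnicity`, `ai_positive`, `truth`) are hypothetical and not drawn from any cited standard or product.

```python
from collections import defaultdict

def subgroup_sensitivity(cases, group_key="ethnicity"):
    """Tally AI true/false positives and negatives per patient subgroup.

    `cases` is a list of dicts with a grouping attribute (hypothetical
    field name), `ai_positive` (the model's output) and `truth` (the
    radiologist's ground-truth label). Returns sensitivity per group;
    a large spread between groups is one signal of possible bias.
    """
    tallies = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for case in cases:
        t = tallies[case[group_key]]
        if case["ai_positive"] and case["truth"]:
            t["tp"] += 1
        elif case["ai_positive"] and not case["truth"]:
            t["fp"] += 1
        elif not case["ai_positive"] and case["truth"]:
            t["fn"] += 1
        else:
            t["tn"] += 1
    return {
        g: t["tp"] / (t["tp"] + t["fn"]) if (t["tp"] + t["fn"]) else None
        for g, t in tallies.items()
    }
```

A real validation would also compare specificity and look at confidence intervals given small subgroup sizes, but the principle is the same: stratify performance by the attributes Section 1557 covers before the model goes live.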

She recommends radiologists or AI champions at hospitals and practices read up on the requirements ahead of the May 1 deadline.

The role of quality assurance in radiology AI bias mitigation

Just as imaging systems require quality assurance testing to ensure they are operating correctly and are properly calibrated, so do AI algorithms, which can shift over time due to variations in data inputs. The American College of Radiology (ACR) and other professional organizations are stepping up to assist healthcare providers in navigating these challenges. Initiatives like the ACR’s Assess AI program provide resources for evaluating AI performance and understanding the nuances of training data.

Additionally, the Radiological Society of North America (RSNA) has released a comprehensive FAQ on Section 1557 that offers guidance for hospitals and radiology practices.

"The other part we have to think about when it comes to bias is it's not just about the patient population, it's also about your equipment. So, this includes anything that changes the data that keeps it out of distribution from what the training data was. You could get a new scanner, or you could have a different tech come in and they use a different protocol. Maybe you just got a new software upgrade on your scanner. Any of those things might actually have a bigger effect than some of the patient population things we have to consider," Kottler explained.

Additionally, radiology AI needs to be monitored and tested by a radiologist, the ultimate end user, whose knowledge and experience reading images and reports allows them to recognize nuances that may represent AI bias or AI drift.

"If you're talking about radiology AI, you need a radiologist that is consuming the AI to review it, because they're going to be less biased than another kind of physician or nurse practitioner. Then you need the AI to be transparent and explainable. And the last piece is continuous monitoring," Kottler said.
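The continuous-monitoring piece Kottler mentions can be as simple as watching whether the AI's positive-finding rate stays close to a historical baseline; a sudden shift after a scanner or protocol change is a drift signal worth investigating. A minimal sketch, with an arbitrary illustrative tolerance rather than any published threshold:

```python
def drift_alert(baseline_rate, recent_flags, tolerance=0.10):
    """Flag possible AI drift when the recent positive rate deviates
    from the historical baseline by more than `tolerance` (absolute
    difference). `recent_flags` is a list of booleans recording whether
    the AI flagged each recent study.
    """
    if not recent_flags:
        return False  # nothing to compare yet
    recent_rate = sum(recent_flags) / len(recent_flags)
    return abs(recent_rate - baseline_rate) > tolerance
```

In practice, alerts like this would prompt a radiologist-led review of recent cases rather than any automated action.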

She said Radiology Partners pays her to spend time learning about AI, developing ways to assess AI models and implementing them into radiology workflows, and that practices and health systems need to take a similar approach. If a radiologist is asked to carry a full clinical workload and also oversee AI, she said, there is no way they can spend the time required to become an expert and gain the deep understanding needed to help with things like QA and meeting regulatory requirements.

She added that there should also be feedback mechanisms when AI is integrated into the PACS workflow. These should allow users to easily click if the AI is correct with a true positive or false positive and provide additional information that can be used to better understand how the AI is working and what issues are being seen.
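The feedback mechanism described above amounts to capturing a small structured record each time a reader agrees or disagrees with an AI result. A minimal sketch of such a record, assuming hypothetical field names rather than any actual PACS vendor API:

```python
import json
from datetime import datetime, timezone

def record_ai_feedback(study_id, ai_finding, radiologist_agrees, note=""):
    """Build a feedback event for one AI result (illustrative fields).

    Classifies the event as a true or false positive based on whether
    the reading radiologist agreed with the AI finding.
    """
    return {
        "study_id": study_id,
        "ai_finding": ai_finding,
        "result": "true_positive" if radiologist_agrees else "false_positive",
        "note": note,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def to_audit_line(event):
    # In practice these lines would be appended to an audit log for QA review.
    return json.dumps(event, sort_keys=True)
```

Aggregating these events over time is what makes it possible to spot patterns, such as a model that generates false positives mostly on one scanner or one protocol.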

The Biden administration took proactive steps to address bias in AI through executive orders and Department of Health and Human Services (HHS) directives. However, the regulatory environment may shift depending on the political landscape in the coming years under the Trump administration.

Dave Fornell is a digital editor with Cardiovascular Business and Radiology Business magazines. He has been covering healthcare for more than 16 years.

