Who will be liable in the coming AI age? 4 things for radiologists to know
Artificial intelligence solutions are gaining steam across imaging practices, and radiologists must familiarize themselves with the current legal landscape as the technology continues to evolve.
That’s according to a new perspective piece from Harvard law and radiology experts, published April 9 in Skeletal Radiology. As it stands, there is very little legal precedent involving medical imaging AI, and radiologists, their practices and developers each face different risks, the pair noted.
Taking both a legal and clinical perspective, the pair outlined key fundamentals of AI liability as they relate to musculoskeletal imaging.
1. Radiologists’ general negligence
In negligence cases, the framework that typically covers medical malpractice, the plaintiff must prove four key elements: a duty of care, a breach of that duty, causation and damages, explained H. Benjamin Harvey, MD, JD, with Massachusetts General Hospital and Harvard Medical School, and Vrushab Gowda, JD, of Harvard Law School.
Courts will look to standards outlined by professional societies, community practices and departmental protocols. This makes guidelines, user training and appropriate AI integration “critical,” the authors noted.
At the same time, under causation theory, rads should know they likely face greater malpractice risk if they miss findings because they relied on AI than if they override an ultimately accurate AI diagnosis with their own “erroneous” judgment.
2. Informed consent a thorny issue
There are two key questions to ask, the pair explained: should rads inform patients they are using AI, and if so, what exactly should they disclose? Both relate to issues of liability.
While these concerns don’t play much of a role in current applications, that may change if AI becomes fully autonomous, the authors noted. But such autonomy is unlikely to arrive anytime soon, they added.
The disclosure question, however, will likely come down to either a provider-based standard (what a reasonable physician would disclose) or a patient-based standard (what a reasonable patient would want to know).
“In reality, the distinction between the two is hazy,” Harvey and Gowda explained. “The rub of the matter is that radiologists should be cognizant of their peers’ disclosures and anticipate the sorts of information patients would find important when deploying AI.”
3. Radiology groups face risks
If a radiologist relies on a tool that produces incorrect and damaging outcomes, patients may choose to sue the hospital or radiology practice instead of the individual physician. This opens healthcare systems up to “vicarious liability,” the authors noted.
In that case, a mistake by the radiologist would implicate those at the top of the organization. Again, this underscores the need for clearly written practice and departmental protocols, Harvey and Gowda added.
4. Developers may be at fault
If a startup’s bone imaging tool mislabels a finding and fails to recommend a biopsy for a patient who needs one, the developer may also be liable.
This is a legal grey area right now, the authors noted, but such claims may ultimately fall under a strict liability framework. If the plaintiff proves a manufacturing defect was at fault, they would likely win, but such defects are difficult to show, the pair said.
Negligence principles, by contrast, look at the conduct of the healthcare enterprise rather than the product itself. It’s unclear how courts will handle these situations, and this is an area radiologists should keep an eye on.
All in all, the pair said radiology advocates should be involved in developing strict imaging AI standards.
“Looking forward, imaging departments should articulate clear protocols for their use, to include procedures in the event of human-AI discrepancy,” the researchers explained. “They can be aided by ACR and SSR-validated guidelines, training programs, and model practices for deploying AI technologies.”