In search of the Holy Grail: Outcomes metrics

It’s no secret that healthcare in the U.S. provides comparatively poor value. That’s not an opinion but a fact: costs are high relative to outcomes when compared with other Western nations. What can the profession of radiology do to help reduce the former while simultaneously improving the latter?

John Nance, MD, of Johns Hopkins Medicine, took up the question in a talk at the annual meeting of the Society for Imaging Informatics in Medicine (SIIM) in National Harbor, Md.

Defining value in healthcare as outcomes divided by the costs necessary to deliver them, Nance said that radiology has shown no problem acknowledging that it lacks meaningful outcomes measures. But recognizing a problem and solving it are two different things.
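Expressed as a simple equation (a paraphrase of Nance’s framing rather than a formal model):

\[
\text{Value} = \frac{\text{Outcomes}}{\text{Costs required to deliver those outcomes}}
\]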

“We know that reimbursement is going to be tied to outcomes,” said Nance, who was a resident physician at Johns Hopkins at the time of the SIIM conference. “This means that, in the future, if we can’t measure it, not only can we not manage it, but it looks like we’re not going to get reimbursed for it, either. This has certainly put us in a precarious position as we move forward.”

Before spotlighting some of the different payment models that could support possible solutions, Nance reviewed—and briefly critiqued—the three major national systems applying metrics to outcomes:

  • The Healthcare Effectiveness Data and Information Set (HEDIS), launched in the early ’90s, is now used by more than 90% of America’s health plans to measure HMO performance and match patients with plans. “To put it bluntly, administrators are paid contingent to the HEDIS metrics,” Nance said. “They matter.” Of its 81 metrics, only three have anything to do with diagnostic imaging: osteoporosis screening, mammography screening and avoidance of advanced imaging for low back pain. “You’ll note that, of those three, none are related to outcomes,” Nance said, “and none of them are controlled by radiologists themselves.”
  • The Physician Quality Reporting System (PQRS), launched by the federal government in 2006 and now the largest pay-for-reporting initiative in the U.S., has 254 metrics. Excluding interventional radiology, only 13 of those 254 have anything to do with diagnostic imaging, Nance said. “And again, out of those 13, none deals directly with patient outcomes. So in effect, none are measuring our value.”
  • The National Quality Forum (NQF), the multistakeholder nonprofit that seeks to set and validate quality metrics across healthcare, endorses 636 measures—just 15 of which deal specifically with diagnostic imaging. That’s 2.4%, despite the fact that imaging accounts for 14% of total healthcare costs, Nance pointed out. “Each NQF-endorsed metric has a steward. Usually it’s a professional society or something similar. ACR is the steward of only one NQF-endorsed metric at this time and, again, none of the measures deals specifically with patient outcomes.”

So there you have your national quality value outcomes measures, said Nance. How is radiology responding?

Avoid red herrings

Radiology has traditionally measured its value, or at least its promise of quality, through credentialing. If you were board certified, that meant you were good enough to practice radiology, Nance stated. No longer will credentials be considered a true measure of value.

At the same time, most of the metrics proposed in the radiology literature—from the number of safety or quality projects completed, to patient satisfaction surveys, to peer-review agreement rates—look at processes rather than outcomes.

Nance noted that the American College of Radiology has advanced a number of more meaningful tools for, and approaches to, demonstrating quality, most notably Imaging 3.0, RadPeer and ACR Appropriateness Criteria. “But, importantly, there are no good suggestions on how we are going to tie these things to reimbursement,” he said. “Furthermore, the data on both RadPeer and ACR Appropriateness Criteria for actually improving outcomes is lacking at this point.”

Turning to the shortcomings of other measures presently in place, Nance noted that structure measures, such as whether a PACS is in place or how many nurses per patient staff the ICU, sometimes do indeed correlate with outcomes. “However, particularly within the imaging department, these have not been shown to correlate as highly with outcomes.”

Another set of options—process measures focused on various aspects of clinical and business operations—is both ubiquitous and straightforward to track. But it’s easy to become sidetracked evaluating processes, without much improvement to show for the effort, because “people tend to gravitate toward measures that are more easily extractable [even though] they may not necessarily be the most appropriate measures to demonstrate full outcomes,” Nance said. “People start to teach toward the test or divert resources away from important places in order to meet their requisite measures.

“This leaves us with the most important quality measure: outcomes. Why are these so elusive?”

Successes to build on 

There are a number of reasons for the slipperiness, Nance said. For starters, stringent risk stratification must be in place to account for variability across patient populations. In addition, very large sample sizes and very long follow-up periods are needed to show differences in outcomes.

“It’s just very difficult to tease out, throughout the entire care episode, what value came from the imaging based on a patient’s outcome,” Nance said. “There are just too many other confounding factors, ranging from the patient’s pre-existing disease state to how the referring clinicians treated the report that they got from imaging.”

That’s not to say that radiology can’t play in the outcomes sandbox. Nance pointed to large randomized controlled trials of procedures such as coronary CT angiography (CTA) in the emergency department for the evaluation of chest pain, which has been shown to decrease time to discharge compared with standard care while maintaining equivalent outcomes.

Similarly, low-dose CT screening for lung cancer has been shown to decrease mortality by up to 20%. And a very common example, use of CT in the ED for suspected appendicitis, has been shown to both improve outcomes and decrease costs.

With these successes comes another caveat: Such large-scale studies cannot measure the quality or the value of an individual imaging provider. In order to tie outcomes to the performance of a given individual radiology practice, “you’d basically have to run a continuous randomized controlled trial—which, in addition to being completely impractical, would deny half the patients the care they need,” Nance said.

“So, this is where we’re left,” he added. “This is our challenge.”

Tackle-ready to-do list

While radiology is working to find better ways to measure and demonstrate its hard-to-quantify contributions to improving outcomes while reducing costs, Nance suggested that practitioners can focus more attention on three value-adds readily at hand:

  • Quality of communication. “We should be making sure that actionable information is given to the proper person at the proper time,” Nance said. “This might not be the referring provider at the time of the examination. It might be the patient’s primary care physician three months later for follow-up on some incidental finding in an exam that was ordered by a specialist.”
  • Care-management participation. “We can do a better job of measuring how imaging changes care management in various situations—not only the change in management itself, but also the time to diagnosis, time to initiation of treatment, time to discharge.”
  • Diagnostic accuracy. Going forward, peer-review data must reflect whether the process actually measures—as well as improves—diagnostic accuracy, Nance said.  

“Outcomes measures are the most important measures when we’re talking about true value,” Nance said, calling to mind a point he made earlier in the talk: “If radiologists continue to insist upon being paid in a fee-for-service system, we can naturally expect that people are going to decrease the amount of imaging that they order—possibly to the detriment of patient care.”

Dave Pearson

