CDS studies generally show positive effects, but gaps in research remain
A meta-analysis of studies on the implementation of clinical decision support (CDS) for imaging found moderate evidence that integrating such tools with the EHR can improve the appropriateness of imaging use and yield a small decrease in test ordering, though more research is needed, particularly on possible harms.
Published in the April 21 edition of Annals of Internal Medicine, the systematic review and meta-analysis also found some evidence that CDS interventions with a “hard stop,” which prevents clinicians from overriding the system without an outside consultation, may be more effective.
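For readers unfamiliar with the design, a “hard stop” can be pictured as a gate in the ordering workflow rather than a dismissible alert. The short Python sketch below is purely illustrative and not drawn from any study in the review; the order fields, the scoring rule and all names are invented here for demonstration.

```python
# Hypothetical sketch of a "hard stop" CDS gate: a low-appropriateness imaging
# order is blocked outright unless an outside consultation has been recorded,
# instead of merely showing the clinician an overridable warning.
from dataclasses import dataclass

@dataclass
class ImagingOrder:
    study: str              # e.g. "lumbar MRI"
    indication: str         # clinician-entered reason for the order
    consult_approved: bool  # has a required outside consult signed off?

def appropriateness_score(order: ImagingOrder) -> int:
    """Toy stand-in for an appropriateness-criteria lookup (1 = low, 9 = high)."""
    # A real CDS tool would map study + indication to published criteria;
    # one low-scoring example is hard-coded here for illustration.
    low_value = {("lumbar MRI", "uncomplicated low back pain, <6 weeks"): 2}
    return low_value.get((order.study, order.indication), 7)

def submit_order(order: ImagingOrder) -> str:
    if appropriateness_score(order) >= 4:
        return "ORDER PLACED"
    # A soft-stop design would allow an override at this point; a hard stop
    # refuses the order until a consultation is documented.
    if order.consult_approved:
        return "ORDER PLACED (consultation documented)"
    return "ORDER BLOCKED: consultation required before this study can be ordered"

if __name__ == "__main__":
    order = ImagingOrder("lumbar MRI", "uncomplicated low back pain, <6 weeks", False)
    print(submit_order(order))  # ORDER BLOCKED: consultation required ...
```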
Caroline Lubick Goldzweig, MD, MS, of Veterans Affairs West Los Angeles Healthcare Center, and colleagues conducted the analysis for the Veterans Health Administration Choosing Wisely Workgroup. After scanning English-language articles in PubMed from 1995 to September 2014 and searching for citations in Web of Science, the authors selected 23 studies that assessed the effect of CDS on test ordering: three randomized trials, seven time-series studies and 13 pre-post studies. Two of these studies, however, did not present sufficient data to be included in the quantitative analysis.
The studies included in the quantitative analysis provided moderate-level evidence that CDS improved appropriateness (effect size: -0.49) and reduced imaging use (effect size: -0.13).
Harms were rarely assessed, but Goldzweig and colleagues noted that possible pitfalls of CDS could include decreased ordering of appropriate tests and reduced physician satisfaction. They cited one study in which a CDS intervention featuring a “hard stop” likely contributed to a delay in care for four patients, prompting the local institutional review board to halt the study.
“Another study, excluded from our review because it assessed a pediatric population, surveyed physicians and found that most believed that [CDS] was ‘a nuisance’ and ‘not relevant to the complex or high risk patients they had to treat,’” wrote Goldzweig and colleagues.
Another gap in the research noted by the authors was the limited reporting of clinician training as part of implementation, a feature described in only about one-third of the reviewed studies.
“This lack of reporting of context and implementation, which is common to many studies of health IT, limits readers' ability to draw conclusions about effectiveness and may perpetuate the belief that these kinds of interventions can be developed separate from the workflow of practicing clinicians and then simply ‘turned on’ with the expectation that clinicians will know how to use the intervention and use it correctly,” wrote Goldzweig and colleagues.