5 tips for running a technologist-focused QI project in radiology
A tech-centric project aimed at improving the quality of radiographic images in the radiology department of a pediatric teaching hospital cut technologists’ collective error rate from 2.7 percent during the project’s first three months to 0.9 percent over its final six months.
Meanwhile, the proportion of individual techs with error rates above 3 percent plummeted from 28 percent to 5 percent over the same period.
The institution was Children’s Hospital Los Angeles, which is affiliated with the Keck School of Medicine of USC, and the team behind the numbers shares its takeaways in a case study published online May 24 in the Journal of the American College of Radiology.
The improvements in error rates came with one complication: retake rates actually rose a percentage point, from 7 percent to 8 percent, over the course of the 17-month project (August 2015 to December 2016). The most common error types, identified early on, were radiation exposure, patient positioning, field of view, technique and overlying artifact.
In introducing their work, resident physician Justin Glavis-Bloom, MD, radiologist Amit Sura, MD, MBA, and Ruth Rizzo, BA, note that digital-era work modes tend to physically separate radiologists from technologists. The result is often a falloff in communication on image quality.
“Radiologist feedback is essential, as technologists and radiologists may differ in what defines adequate diagnostic image quality,” they write. “Efforts to improve quality must address both the final image, which affects the ability of the radiologist to interpret the study, as well as the number of retakes performed by the technologist.”
Children’s Hospital Los Angeles (CHLA) employs around 30 radiologic technologists. They work with 10 or so pediatric radiologists on approximately 7,000 radiographs per month.
For the QI project, the team developed a standardized template to record quality errors during report dictation, the authors explain. The full radiology department reviewed errors and summary data analytics during regular departmental meetings, then created a checklist for techs to follow, posting it on all radiography units within the department.
After calculating per-technologist error rates, project leaders had the technologist supervisor review each problematic image with the acquiring technologist and offer suggestions for improvement. The promise of end-of-year gift cards served as an incentive for performance improvement.
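For departments wondering what such per-technologist tracking might look like in practice, here is a minimal sketch in Python, assuming a simple log of exams flagged through a dictation template. The field names, data and the 3 percent review threshold are illustrative assumptions, not details drawn from the CHLA study.

```python
from collections import defaultdict

# Hypothetical exam log: (technologist ID, whether the reading radiologist
# flagged a quality error via the dictation template). All names and data
# here are illustrative, not taken from the CHLA study.
exams = [
    ("tech_01", False), ("tech_01", True), ("tech_01", False),
    ("tech_02", False), ("tech_02", False), ("tech_02", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for tech, has_error in exams:
    totals[tech] += 1
    errors[tech] += has_error  # bool counts as 0 or 1

THRESHOLD = 0.03  # assumed flag level, echoing the 3 percent figure above

for tech in sorted(totals):
    rate = errors[tech] / totals[tech]
    note = "  <- review images with supervisor" if rate > THRESHOLD else ""
    print(f"{tech}: {errors[tech]}/{totals[tech]} = {rate:.1%}{note}")
```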
The decreases in error rates at CHLA validate the program’s approach, and the authors offer five lessons for radiology departments considering similar QI projects:
1. Analyze preliminary data from a pilot phase. At CHLA, this period was the first three months of the project. The insights gleaned informed the creation of the department’s targeted checklist flagging common errors, the authors note.
2. Track errors at the level of the individual technologist. At CHLA, ongoing feedback tied to specific images with errors “enabled continuous improvement, which was evident in decreased error rates across our technologist staff,” the authors write.
3. Facilitate radiologist participation. “It is essential to minimize disruption to workflow, incorporating interventions into existing dictation systems and processes as much as possible,” they explain.
4. Audit participation. “Of 10 radiologists who remained at our institution throughout the intervention, only six participated consistently, representing 55 percent of radiographs.” This was vexing to the study authors, not least because they’d incorporated their standardized template into the dictation software in order to minimize workflow disruption.
5. Regularly track image retake rates. Such tracking is crucial to ensure that, over time, improvements in one domain are not offset by deterioration in another, and to protect patients from increased radiation exposure, the authors emphasize.
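As a rough illustration of that dual tracking, the snippet below (same caveats as the sketch above) computes error and retake rates side by side for two months. The counts are contrived solely to echo the endpoint figures reported in this article; they are not the study’s raw data.

```python
# Contrived monthly summaries: counts are chosen only to reproduce the
# endpoint rates quoted in this article (2.7% -> 0.9% errors, 7% -> 8%
# retakes on roughly 7,000 radiographs a month); they are not study data.
months = [
    {"month": "2015-08", "exams": 7000, "errors": 189, "retakes": 490},
    {"month": "2016-12", "exams": 7000, "errors": 63,  "retakes": 560},
]

for m in months:
    print(f"{m['month']}: error rate {m['errors'] / m['exams']:.1%}, "
          f"retake rate {m['retakes'] / m['exams']:.1%}")
```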
Glavis-Bloom and colleagues cite prior studies showing significant variation in technologist performance and an association between improved final image quality and higher retake rates. “When surveyed, technologists report rarely receiving feedback regarding image quality and recognize the need for improved education,” they write.
“Numerous attempts have been made to create ancillary processes for radiologists to record quality errors,” the authors write. “Prior studies have reported variations in radiologist participation and error detection rates that may be falsely low. Ideally, data should be captured within the radiologist’s report to minimize interruption to workflow and should be a part of a continuous improvement effort with individualized feedback and coaching for technologists.”