Limiting clinical disagreements key to reducing variability in CT measurements
Researchers have identified why radiologists may produce different CT measurements of the same cancer lesions, sharing their findings Jan. 10 in Current Problems in Diagnostic Radiology.
A small group of South Carolina-based doctors created a peer benchmarking intervention tool to see if they could reduce interobserver variability in CT measurements between radiologists. And while the tool was not successful, they did find that a clinician’s preferences for CT slice selection, measurement start point and endpoint were all highly influential in causing clinical disagreement.
The findings highlight areas future researchers can target to improve the accuracy and consistency in evaluating each patient’s response to cancer treatment, Ronald W. Gimbel, PhD, with Clemson University, and colleagues wrote.
“While the accuracy and consistency in the evaluation of treatment response are essential for reliable treatment management, a growing number of research studies have reported that the tumor size measurements using CT scans are subjected to interobserver variability,” they said.
“Inconsistency in measurements may increase risk for suboptimal interpretation of treatment response,” the group added later.
For their study, Gimbel et al. created a benchmarking intervention specific to each CT measurement, providing clinicians with information intended to reduce how far their measurements deviated from the median.
Thirteen board-certified radiologists reviewed 10 CT image sets of lung lesions and hepatic metastases over three sessions. Before making their final measurements, the experts were presented with the intervention tool in an effort to rein in the assessments that strayed furthest from the median.
Overall, there was no statistically significant change in deviating measurements as a result of implementing the peer benchmarking tool—a lesson worth heeding for future researchers, the authors noted.
“Our findings reaffirm the notion that the availability of an intervention tool does not necessarily result in its immediate effect,” they wrote.
Despite this, Gimbel and colleagues hope future endeavors can use their findings to break down barriers and reduce variability.
“If future interventional efforts targeting measurement variability are to include a peer benchmarking intervention tool, researchers should consider how one's diagnostic behavior can be characterized (e.g., typically under-measure or over-measure) within the study design and how clinical disagreement among radiologists can be systematically measured and resolved in an interactive approach,” they concluded.