Radiologists should ally with CAD, but not solely rely on it
CHICAGO, Nov. 28—In a lecture titled “Image science from a perceptual point of view with CAD” at the 93rd annual meeting of the Radiological Society of North America, two radiologists defended the use of computer-assisted detection (CAD) despite consistent opposition and the technology’s potential for false positives.
Matthew Freedman, MD, of the department of oncology at the Lombardi Comprehensive Cancer Center at Georgetown University Medical Center in Washington, D.C., said that CAD improves the detection of “things,” which are eventually deemed benign, malignant or actionable. Freedman gauged the effectiveness of CAD by its effect on sensitivity, specificity, intra-observer variability and inter-observer variability.
He made the distinction that “CAD detects ‘things’ that look like they could be questionable masses or calcium but doesn’t classify or diagnose them.” If a radiologist labels these “things” as actionable (not necessarily cancer) rather than as malignant, specificity will increase. With the assistance of CAD, more benign and malignant “things” will be seen and fewer will be missed, so intra-observer and inter-observer variability will increase, according to Freedman. If benign “things” are frequent, specificity will decrease and the cost of clinically evaluating the “things” will increase.
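To make Freedman’s metrics concrete, here is a minimal sketch in Python, with made-up counts that are not from the talk, of how sensitivity and specificity are computed, and how working up many benign CAD-prompted “things” raises sensitivity while pulling specificity down.

```python
# Illustrative sketch only: the counts below are hypothetical, not figures
# reported by Freedman.

def sensitivity(tp, fn):
    """Fraction of actual cancers that get flagged (true-positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of cancer-free cases left unflagged (true-negative rate)."""
    return tn / (tn + fp)

# Radiologist alone: finds 65 of 100 cancers, works up 10 of 200 cancer-free cases.
print(sensitivity(tp=65, fn=35))    # 0.65
print(specificity(tn=190, fp=10))   # 0.95

# Radiologist plus CAD prompts: more cancers are found, but more benign
# "things" are also worked up, so sensitivity rises while specificity falls.
print(sensitivity(tp=84, fn=16))    # 0.84
print(specificity(tn=170, fp=30))   # 0.85
```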
Freedman also examined the concern that radiologists see things that are not there, even without CAD. Based on a series of clinical trials, he stated that both radiologists and CAD mark false positives on cancer-free films.
In one clinical trial using a Riverain RapidScreen RS-2000 chest radiography CAD system, 80 proven lung cancer cases, ranging in size from 9.5 mm to 27 mm, and 160 cancer-free cases were examined by 15 radiologists. The researchers compared two reads by the radiologists to a single CAD interpretation. Overall, 21 percent more cancers were detected with the CAD system.
In the study, the radiologists disagreed on the likelihood of cancer. For intra-observer variability, a single radiologist reading the cases twice without CAD assistance showed a great deal of variability between reads. For inter-observer variability, two radiologists without CAD changed over the course of the two reads both in their own diagnoses and relative to each other. In the end, the readings of the 15 radiologists differed from one another and from their own previous reads.
According to Freedman, radiologists do not accept all true-positive results, nor do they accept all CAD prompts, regardless of lesion size. Across all tumor sizes, CAD alone identified 66 percent of the cancers and radiologists alone identified 65 percent. With radiologists reading with the assistance of CAD, detection reached 84 percent sensitivity.
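The jump from roughly 65 percent alone to 84 percent combined only works if the radiologists and the CAD system tend to miss different cancers. The short calculation below is an illustration of ours; only the 65, 66 and 84 percent figures come from the talk. It shows the ceiling that fully independent misses would give, and why the reported 84 percent sitting below that ceiling is consistent with radiologists rejecting some true CAD prompts.

```python
# Illustrative arithmetic: the 0.65, 0.66 and 0.84 figures echo the talk;
# the independence assumption is ours.
p_rad = 0.65  # fraction of cancers detected by radiologists alone
p_cad = 0.66  # fraction of cancers marked by CAD alone

# If the two missed cancers independently of each other, the combined
# (union) detection rate would be:
p_union = 1 - (1 - p_rad) * (1 - p_cad)
print(round(p_union, 3))  # 0.881

# The reported combined sensitivity of 0.84 sits below that ceiling, which
# fits Freedman's point that radiologists do not accept every true-positive
# CAD prompt presented to them.
print(0.84 < p_union)  # True
```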
He believes that a new capability will soon be integrated into or available with CAD: the potential to diagnose small cancers that are more likely to be metastatic.
Freedman concluded by stating that if radiologists followed the prompts more often, then “we could improve the detection systems themselves.” He also suggested that radiologists’ interaction with diagnostic systems should be studied further.
Michael J. Ulissey, MD, assistant professor of radiology at the University of Texas Southwestern Medical Center in Dallas, reinforced many of Freedman’s views on CAD, but focused specifically on the “Perceptual Effects of CAD in Mammography” in his lecture.
Ulissey acknowledged that much of the perceptual challenge relates to the density of the breast. “We have to tell the good white from the bad white in mammograms for detecting breast cancer,” he said, a task that is more complicated in denser breasts.
He listed many of the intangible complications that stand in the way of correct diagnoses. First is “the vagaries of Monday vs. Wednesday,” alluding to the fact that there is often no clinical reason why radiologists misread images: “If you’re a radiologist, sometimes you notice the malignancy because it’s Monday, but not if it’s Wednesday.” He also pointed to fatigue and eye strain, which can affect proper reading. Third is the commonly cited “Where’s Waldo” effect, “but, unlike Waldo, our images are always in black and white and Waldo is not on every page.” Ulissey also suggested that tunnel vision can affect a diagnosis: “you focus on the larger, obvious issues without paying attention to smaller, potentially more dangerous ones.” Finally, he said that distractions can interfere with a radiologist’s diagnosis, because tumors are rare on mammograms and the human mind tends to wander during monotonous tasks.
He did acknowledge that CAD can occasionally make “benign, or stray, marks,” and that sometimes CAD won’t mark suspicious masses or calcium, but radiologists need to take its recommendations in stride, using their own judgment in concert with the device.
Ulissey cited the Linda Warren Burhenne retrospective study of 1,083 mammograms that led to the diagnosis of breast cancer. The trial obtained the most recent studies prior to diagnosis, and all of the mammograms and priors were reviewed both by a panel of expert radiologists and by CAD. The study found that the radiologists detected 67 percent of the cancers on the prior studies, 27 percent were deemed actionable (a true miss), and CAD correctly marked 77 percent of the cancers in the prior studies. Overall, 21 percent of the cancers could have been detected a year earlier had CAD been used.
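One hedged way to relate the reported percentages, which is our interpretation and not spelled out in the study summary, is that the 21 percent follows from CAD marking roughly 77 percent of the 27 percent of cancers the panel judged actionable on the priors:

```python
# A hedged reading of how the reported percentages may relate; the
# multiplication below is our interpretation, not stated in the study.
actionable_on_priors = 0.27  # share of cancers deemed actionable misses on the priors
cad_marked_on_priors = 0.77  # share of cancers CAD marked on the priors

# If CAD marked the actionable misses at roughly its overall rate, the share
# of cancers that could have been flagged a year earlier lands near the
# reported 21 percent.
print(round(actionable_on_priors * cad_marked_on_priors, 3))  # 0.208
```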
A 2004 prospective study conducted by Ulissey and Timothy W. Freer, MD, of the Women’s Diagnostic and Breast Health Center in Plano, Texas, examined 12,860 women presenting for screening mammography over a 12-month period. Overall, the assistance of CAD increased cancer detection by 17 percent, primarily among stage 0 and stage 1 cancers, and did not change the positive predictive value for biopsy at all.
Ulissey did encourage his fellow radiologists to use the detection systems because “CAD can be our greatest ally.”