Positive predictive value often overestimated

Positive predictive value (PPV) is commonly overestimated, and medical education must therefore deliver the statistical skills needed to accurately incorporate the increasing diversity of diagnostic options into clinical care, according to a study published in the June issue of JAMA Internal Medicine.

Lead author Arjun K. Manrai, AB, of Harvard-MIT Health Sciences and Technology in Boston, and colleagues replicated a study performed in 1978 by Casscells et al that revealed a majority of physicians, house officers and students overestimated the PPV of a laboratory test result when given the disease prevalence and the test’s false positive rate. “Understanding PPV is particularly important when screening for unlikely conditions, where even nominally sensitive and specific tests can be diagnostically uninformative,” wrote Manrai and colleagues.

The authors posed the same question used in the earlier study to a convenience sample of respondents: “If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?” The sample included 24 attending physicians, 26 house officers, 10 medical students and one retired physician from a wide variety of clinical specialties.

Assuming a perfectly sensitive test, the researchers calculated the correct response as 1.96 percent and accepted “2%,” “1.96%,” and “<2%” as correct answers. Approximately three-quarters of respondents answered the question incorrectly. Fourteen of the 61 respondents gave a correct response, a proportion not significantly different from the 11 of 60 correct responses in the Casscells study. In both studies, the most common answer was “95%,” given by 27 of 61 respondents in the current study and 27 of 60 in the Casscells study. Answers in the present study ranged from “0.005%” to “96%,” with a median of 66 percent. In explaining their answers, participants often knew how to compute PPV but failed to account correctly for disease prevalence.
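For readers who want to check the arithmetic, the 1.96 percent figure follows directly from Bayes’ theorem. Below is a minimal sketch in Python, assuming the values stated in the study’s question (prevalence of 1/1000, false positive rate of 5%) and a perfectly sensitive test, as the researchers did:

    # PPV = P(disease | positive test), computed via Bayes' theorem.
    # Inputs are taken from the question posed in the study; sensitivity
    # of 1.0 reflects the researchers' assumption of a perfectly sensitive test.
    prevalence = 1 / 1000        # P(disease)
    sensitivity = 1.0            # P(positive | disease), assumed perfect
    false_positive_rate = 0.05   # P(positive | no disease)

    true_positives = sensitivity * prevalence
    false_positives = false_positive_rate * (1 - prevalence)

    ppv = true_positives / (true_positives + false_positives)
    print(f"PPV = {ppv:.4f}")    # prints 0.0196, i.e. roughly 2%

Because the disease is rare, the small false positive rate applied to the large disease-free population swamps the handful of true positives, which is why the common answer of “95%” is such a large overestimate.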

“We advocate increased training on evaluating diagnostics in general,” wrote the study’s authors. “Specifically, we favor revising premedical education standards to incorporate training in statistics in favor of calculus, which is seldom used in clinical practice. In addition, the practical applicability of medical statistics should be demonstrated throughout the continuum of medical training—not just medical school,” they concluded.
