Researchers ask: Is there publication bias in RSNA-presented abstracts?
Recent research has found that studies with better diagnostic test accuracy get published more quickly, but does the same pattern hold for abstracts presented at the RSNA Annual Meeting?
A team led by Lindsay A. Cherpak, BSc, MD, of the University of Ottawa’s Department of Radiology, Faculty of Medicine, in Ontario, Canada, set out to determine what proportion of imaging diagnostic test accuracy (DTA) abstracts presented at RSNA 2011 and 2012 were eventually published, and whether higher DTA was associated with full-text publication within five years of abstract submission.
“If higher DTA in imaging studies is associated with higher probability of full-text publication, this could lead to overestimates of DTA in systematic reviews and have negative downstream consequences on clinical decision-making and patient care,” the researchers noted in a May 28 study published in Radiology.
Cherpak and colleagues extracted sensitivity and specificity from study abstracts to calculate the Youden index (sensitivity + specificity - 1). Logistic regression then assessed the association between higher diagnostic accuracy and full-text publication.
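For readers curious about the mechanics, the short Python sketch below mirrors the general shape of such an analysis: computing the Youden index from extracted sensitivity and specificity values and regressing publication status on it. The data are synthetic and the variable names are illustrative assumptions; this is not the authors’ actual code or dataset.

```python
# Illustrative sketch only -- synthetic data, not the study's dataset or code.
# Shows the general form of the analysis: compute the Youden index from
# sensitivity and specificity, then regress publication status on it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 405  # number of included abstracts reported in the study

# Hypothetical extracted values for each abstract
sensitivity = rng.uniform(0.5, 1.0, n)
specificity = rng.uniform(0.5, 1.0, n)

# Youden index: sensitivity + specificity - 1
youden = sensitivity + specificity - 1

# Hypothetical binary outcome: 1 if published as full text within five years
# (drawn at ~71%, the publication rate the study reports)
published = rng.binomial(1, 0.71, n)

# Logistic regression of publication status on the Youden index
X = sm.add_constant(youden)
model = sm.Logit(published, X).fit(disp=False)
print(model.summary())
```

In an analysis of this form, a Youden coefficient near zero (an odds ratio near 1) would correspond to the null finding the authors report: diagnostic accuracy was not associated with the odds of full-text publication.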
More than 7,900 abstracts were screened, and 405 met the inclusion criteria. Of those, 288 (71%) were published within five years of submission. Logistic regression found no association between diagnostic accuracy and full-text publication.
“The high proportion of abstracts that reached full-text publication in our cohort may be related to the fact that the RSNA Annual Meeting is the largest imaging meeting in the world; therefore, it attracts high-level research that is more likely to be published,” Cherpak et al. wrote. “Still, the fact that more than a quarter of studies remain unpublished is worrying, as this represents an important source of research waste.”
The team noted that poster presentations and studies with prospective data collection were both less likely to be published. The latter finding was described as “perplexing,” though the researchers suggested that the logistical challenges of prospective data collection can hinder study completion and, in turn, publication.
In a related editorial, Julia R. Fielding, MD, with UT Southwestern Medical Center’s Department of Radiology in Dallas, noted there were many features of accepted abstracts that could not be assessed due to the researchers’ manual review process.
Fielding also discussed how the increasing demands of clinical work make it harder to complete high-quality studies. And while journals such as Radiology have endorsed the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines to improve research quality, many in the field have been slow to adopt such aids.
Consequently, studies have become harder to interpret, according to Fielding, and radiologists must weigh how best to spend their time.
“The discipline of radiology is changing. Research is more heavily based on data transfer, decision analysis, and the development of machine learning. High-quality studies are essential to maintain the vibrancy of our field,” Fielding wrote. “However, during this transition, it is crucially important to keep our most important skill, contribution to patient care, at the forefront of our education.”