Experts call for more structured reporting after study reveals wide variances in radiologist reads
A recent study uncovered a substantial amount of subjectivity and variability in how radiologists dictate reports, prompting experts to call for more streamlined processes.
After analyzing more than 30,000 emergency radiology reports completed over a span of six months, researchers identified a number of factors that influence how radiologists express themselves in free-text interpretations. The day of the week, the time of interpretation, workload burden and even the reader’s gender all affected radiologists’ written expressions.
“The length, structure and content of emergency radiological reports were significantly influenced by organizational, radiologist- and examination-related characteristics, highlighting the subjectivity and variability in the way radiologists express themselves during their clinical activity,” corresponding author Amandine Crombé, of IMADIS and the University of Bordeaux, and co-authors explained. “These findings advocate for more homogeneous practices in radiological reporting and stress the need to consider these influential features when developing models based on natural language processing.”
Information can be extracted from free-text reports through natural language processing (NLP), which can enable automated categorization of large datasets used for research, algorithm development, quality assurance and accreditation, among many other purposes. Radiological reports can be used to develop these NLP models; however, a lack of uniformity in how physicians dictate their reports makes this challenging. This is especially true in emergency settings, where many radiologists of differing experience levels and specialties continue to take call shifts beyond residency.
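To illustrate the kind of automated categorization described above, here is a minimal, purely hypothetical sketch of keyword-based NLP applied to free-text report snippets. The keyword lists, labels and sample sentences are illustrative assumptions, not the study's actual methods or data; production systems would use far more sophisticated models.

```python
import re

# Hypothetical keyword patterns for a toy report classifier.
# Real NLP pipelines for radiology use trained models, not hand-written lists.
UNCERTAINTY = re.compile(r"\b(possible|probable|cannot exclude|suspicious for)\b",
                         re.IGNORECASE)
NEGATION = re.compile(r"\b(no|without|absence of)\b", re.IGNORECASE)

def categorize_report(text: str) -> str:
    """Label a free-text report snippet as 'uncertain', 'negative' or 'positive'."""
    if UNCERTAINTY.search(text):      # hedged language suggests ambiguity
        return "uncertain"
    if NEGATION.search(text):         # negated findings suggest a normal read
        return "negative"
    return "positive"                 # otherwise assume a positive finding

# Illustrative (invented) report snippets:
reports = [
    "No acute intracranial hemorrhage.",
    "Possible small bowel obstruction; cannot exclude ischemia.",
    "Acute appendicitis with periappendiceal fat stranding.",
]
for r in reports:
    print(categorize_report(r))  # prints: negative, uncertain, positive
```

Even this crude sketch shows why variability matters: a radiologist who phrases doubt differently (for example, "equivocal for") would slip past the patterns, which is exactly the kind of inconsistency the study's authors say NLP developers must account for.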
Researchers sought to assess how these differing backgrounds and radiologist characteristics affect the structure and content of emergency radiological free-text reports in an on-call setting, as quantified using NLP tools. To do this, they analyzed 30,227 MRI and CT reports from IMADIS Emergency Teleradiology (in France) that were completed from September 1, 2019, to February 28, 2020, by 165 radiologists. Each reader working for IMADIS held other primary radiological roles outside of the remote emergency reads.
The experts found that reports dictated on weekends, particularly toward the end of a radiologist’s shift and after many exams had already been interpreted, were shorter and less comprehensive. As the number of reports a radiologist completed increased, both positive and negative depictions decreased, while expressions of doubt or ambiguity became more frequent.
Exams labeled with greater urgency were linked to significantly longer, more detailed reports, as were MRI and vascular reads. The same was true for female radiologists compared to male radiologists, and for radiologists reporting outside of their specialty.
“Identifying such influential factors could help us to improve the language used in radiological reports, their quality and to homogenize our practices,” the authors wrote. “We believe that these findings emphasize the subjectivity and variability in the way radiologists express themselves during their clinical activity and, consequently, stress the need to consider these influential features when developing NLP-based models.”
The full study is available in the Journal of Digital Imaging.
More on radiology reporting:
Emergency providers, radiologists must communicate critical reports more effectively
Free-text radiology reports hold clues for managing incidental pancreatic lesions
Freely available algorithms ID venous thromboembolisms from radiology reports