JDI: Voice recognition is double-edged sword
While voice recognition dictation systems offer many benefits, they are a double-edged sword because of notable pitfalls, including high error rates, according to research published in the August edition of the Journal of Digital Imaging.
Chian A. Chang, PhD, from the Southern Health Department of Diagnostic Imaging in Melbourne, Australia, and colleagues sought to ascertain the error rates of using a voice recognition dictation system deployed at Southern Health.
Fifty randomly selected finalized reports from 19 radiologists, obtained between June 2008 and November 2008, were analyzed for errors in categories including wrong-word substitution, deletion, punctuation, nonsense phrase and other. Reports were divided into computed radiography (CR) and non-CR categories, and errors were classified as either significant but not likely to alter patient management (type A) or very significant, with the meaning of the report affected and patient management potentially altered (type B).
Three hundred and seventy-nine finalized CR reports and 631 non-CR reports were examined. According to the researchers, 2 percent of the CR reports contained nonsense phrases, while 36 percent of the non-CR reports had errors; of these, 5 percent contained nonsense phrases.
“A relatively low rate of 6 percent of our CR reports had errors,” the authors wrote. “What is more significant is the high … rate for non-CR reports with errors. We have also found a considerable variation between radiologists in their error rates. It is also likely that some radiologists are unaware of the relatively high error rates that occur when using a voice-recognition dictating system.
“We hope that these findings result in an increase in awareness and reduced error rates (especially type B errors) in our efforts to find a balance between quality and speed of reports generated at our institution,” the authors concluded.