Voice Recognition: Plugged In to Savings

Healthcare spending in the United States has increased dramatically in the last decade, and radiology has been singled out as a main culprit. With imaging volumes rising, reimbursement shrinking and competition intensifying, radiology practices and departments must make more effective use of the data and technology available today. Voice recognition (VR) software can help reduce costs, increase efficiency and improve patient care.

Compared with previous generations of VR technology, today’s systems are more accurate, integrate better with a variety of vendors’ PACS and require markedly fewer mouse clicks per case. All of this has led to greater clinical adoption and widespread use.

In 2002, Shands Hospital in Jacksonville, Fla., transitioned to a PACS integrated with the PowerScribe (Nuance) VR program. Within a short time, the 16 radiologists and nearly two dozen residents were self-correcting their reports. In 2007, it was time to upgrade the PACS because of growth in imaging volume: about 250,000 cases annually for the 700-bed facility.

In the process of shopping for a PACS, Arif S. Kidwai, MD, chief of the division of informatics in the department of radiology at Shands, decided he liked RadWhere (Nuance), a structured reporting application. RadWhere had an updated version of PowerScribe embedded in it, but Kidwai was more interested in the workflow options and integration advantages. At the time, he thought all VR programs were about the same.

After go-live, Kidwai expected to hear compliments about the impressive workflow advantages, but for the first two weeks, no one talked about anything else except the dramatic improvement in VR accuracy.

The old VR system often made mistakes with single-syllable words and with some two-syllable words. Short function words like “and,” “the” and “of” had to be routinely corrected. Multiple-syllable words are generally easier for VR programs to recognize because there are fewer sound-alike options. With the new system, errors on small words decreased by nearly 90 percent. “When you’re self-reporting, that saves a lot of time,” Kidwai says.

RadWhere also reduced mouse clicks per report. For Kidwai, one click in the PACS opens the VR software with that case. If the case is normal, he can open a “normal” template, which dramatically reduces the amount of dictating and, later on, self-correcting. Another click signs the report, and one more closes it, sends it to the referring physician and opens the next case.

Kidwai especially likes having macros for “routine” abnormal statements, such as fatty liver descriptions or gallstones with signs of cholecystitis. These can be called up with a one- or two-word command or pulled from a drop-down menu. “If you use macros and other intelligent-type features built into the system, you perform faster than a transcriptionist 90 percent of the time,” he says.
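
A minimal sketch of how such a macro feature can work, with invented trigger phrases and boilerplate wording rather than actual RadWhere commands:

```python
# Invented trigger phrases and report text; not actual RadWhere macros.
MACROS = {
    "fatty liver": "The liver is diffusely echogenic, consistent with hepatic steatosis.",
    "cholecystitis findings": ("Gallstones are present with gallbladder wall thickening "
                               "and pericholecystic fluid, suggesting acute cholecystitis."),
}

def expand_macros(dictation: str) -> str:
    # Replace a recognized trigger with its full boilerplate text, so only
    # the short command, not the whole finding, has to be dictated.
    for trigger, expansion in MACROS.items():
        dictation = dictation.replace(trigger, expansion)
    return dictation

print(expand_macros("fatty liver"))
# -> "The liver is diffusely echogenic, consistent with hepatic steatosis."
```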

Improving with use

When the Medical Center at Bowling Green, a 490-bed regional healthcare system in Kentucky, used transcriptionists, report turnaround times routinely averaged 24 hours. The department often fielded calls from referring physicians about pending reports, resulting in interruptions and redundant work. If the interpreting radiologist was not available, a second radiologist had to re-evaluate the images, a practice everyone disliked, according to Eddie Scott, director of radiological services.

“The radiologists were not able to reach their full productivity potential because of our dictation and transcription process,” Scott says. In 2006, the radiologists chose SpeechQ (MedQuist). With SpeechQ, they can send the report to the transcriptionist or sign off on it. In addition, the system improves with every edit, adapting to the radiologists’ dictations.
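
One simple way a system can improve with every edit, sketched below as an assumption about how such adaptation might work rather than a description of SpeechQ’s actual internals, is to remember each correction a radiologist makes and apply it to future drafts:

```python
from collections import Counter

# Tally of (recognized, corrected) pairs observed during report editing.
corrections: Counter = Counter()

def learn(recognized: str, corrected: str) -> None:
    # Record that the engine's output was edited to this text.
    corrections[(recognized, corrected)] += 1

def apply_learned(text: str) -> str:
    # Apply the most frequently seen corrections to new drafts.
    for (wrong, right), _ in corrections.most_common():
        text = text.replace(wrong, right)
    return text

learn("hypo dense", "hypodense")
print(apply_learned("A hypo dense lesion is noted."))
# -> "A hypodense lesion is noted."
```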

Today, most reports are self-corrected by the radiologists and immediately signed and routed. Another benefit of SpeechQ is that transcriptionists no longer have to type entire reports; they edit the draft the system produces. In the first year of implementation, turnaround time for inpatient reports dropped to an average of 18 hours. In 2007, the average fell dramatically to 2.2 hours, and in 2008, to 1.4 hours.

The perfect storm

At Scripps Health in San Diego, Kris Van Lom, MD, chair of the radiology department, experienced a transcriptionist perfect storm in 2006. Because of varying circumstances, he lost four of his six FTE transcriptionists in one week. The department fell dramatically behind in transcription and spent massive amounts of money on overtime. It took them months to recover.

Shortly afterward, Van Lom installed a VR program, but only in the smallest of three hospitals. “I wanted to test whether the radiologists would use it. Once I was convinced they would, I began to shop and compare products,” he says.

He chose SpeechQ and has found it to be 99.9 percent accurate. The downside, he says, is that on longer reports it’s difficult to catch those three or four remaining mistakes. As the radiologists make more use of templates and macros, the errors become easier to catch because there is less new text to self-correct. “Every little thing we don’t have to dictate saves us five seconds. Multiply that by 350 reports a day and it’s a considerable savings in time,” he says.
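
By that arithmetic, five seconds saved on each of 350 daily reports comes to roughly 1,750 seconds, or about half an hour of dictation time recovered per day.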

The program also gives physicians the choice to self-correct the report or send it to a transcriptionist, Van Lom says. One mouse click signs it off; another sends it to the transcriptionist. Some colleagues were skeptical of such an option, predicting that radiologists would send everything to the transcriptionists. “Our contention was that the radiologists would rather complete the report as quickly as possible, and that was borne out,” he says.

Van Lom found, however, that the longer, self-corrected reports—more than three or four entirely dictated sentences—contained errors. “We asked them to send their long reports to the transcriptionists,” he says.

The future

Van Lom would like to see VR programs make more use of color-coding, much like the spell-check underlines in a Word document, to make errors easier to catch. For example, when auto-text is called up, unchanged words would appear in one color, such as green, while words changed or inserted by radiologists might appear red. If the transcriptionist makes changes, those might appear yellow. “Such a system would greatly reduce self-correcting time,” he says.
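
A rough sketch of that idea, using Python’s difflib to compare the auto-text against the final dictation and tag each word with a display color (the colors and function names are illustrative; a transcriptionist’s pass could be diffed the same way and tagged yellow):

```python
import difflib

def color_code(template: str, final: str) -> list[tuple[str, str]]:
    # Tag each word of the final report with a display color: words carried
    # over from the auto-text stay green; anything the radiologist changed
    # or inserted is flagged red for review.
    final_words = final.split()
    matcher = difflib.SequenceMatcher(None, template.split(), final_words)
    tagged = []
    for op, _, _, j1, j2 in matcher.get_opcodes():
        for word in final_words[j1:j2]:
            tagged.append((word, "green" if op == "equal" else "red"))
    return tagged

print(color_code(
    "The liver is normal in size and echotexture.",
    "The liver is mildly enlarged with coarsened echotexture.",
))
```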

Kidwai would like to see more intelligent interaction between the radiologist’s actions in the PACS and the VR program. For example, when he measures a kidney lesion in the PACS and the measurement is stored there digitally, why must it then be verbalized to the VR program? The PACS should transfer that data to the VR template automatically, he says.
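
A hypothetical sketch of that handoff, with an invented event format; a real integration would draw on something like DICOM structured reports or an HL7 feed:

```python
# Invented measurement event standing in for structured PACS data.
measurement = {"organ": "left kidney", "finding": "lesion", "size_cm": 2.5}

TEMPLATE = "There is a {size_cm} cm {finding} in the {organ}."

def insert_measurement(event: dict) -> str:
    # Fill the VR report template directly from the PACS measurement,
    # so the number never has to be verbalized.
    return TEMPLATE.format(**event)

print(insert_measurement(measurement))
# -> "There is a 2.5 cm lesion in the left kidney."
```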

Another area for future development is image analysis intelligence. If the radiologist is reading a CT scan of the abdomen/pelvis and measures something at 2.5 cm in the PACS, the VR program could prompt: Is this the liver? The kidney? And when the radiologist makes a measurement in the left kidney, the VR program could display a short list of potential diagnoses or prompt with the American College of Radiology’s recommended follow-up.
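
Sketched as code, and with placeholder rules rather than actual ACR recommendations, such a prompt might look like this:

```python
# Placeholder rules only; a real system would consult the ACR's
# incidental-findings recommendations for the organ and size.
FOLLOW_UP_RULES = {
    "kidney": "Review ACR guidance for incidental renal masses of this size.",
    "liver": "Review ACR guidance for incidental liver lesions of this size.",
}

def prompt_for_followup(size_cm: float) -> str:
    # Without image-analysis context, the program first has to ask
    # the radiologist what was measured.
    organ = input(f"You measured {size_cm} cm. Is this the liver or the kidney? ")
    return FOLLOW_UP_RULES.get(organ.strip().lower(),
                               "No follow-up rule on file for that organ.")
```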

“Voice recognition moves patient information back and forth very well. But I’d like to see information move from the PACS to the voice recognition system,” Kidwai says. “If we have not made some step in that direction in five years, we’ve dropped the ball.”
