Getting the Skinny on Advanced Visualization
Anomalous right coronary artery segmentation with curved planar reformation view. Image courtesy of Visage Imaging.
For many diagnostic imaging practices, dedicated advanced visualization workstations have become legacy equipment. Moore’s Law, graphics processing boards reaching commodity prices thanks to the popularity of computer games, rapid advances in 3D imaging software algorithms (also driven in part by the gaming industry), and a robust DICOM standard for post-processing have together produced a sea change toward thin-client implementations of medical imaging informatics.
Technology in practice
Radiology and cardiology are arguably the two medical specialties that have most warmly embraced advanced visualization software. The explosive growth of multi-detector CT equipment and high-field-strength magnets for MRI systems has resulted in an “image overload” that threatens to overwhelm practices.
“The whole world has been changed by the advent of ultra-fast, multidetector cardiac CT,” says Robert S. Schwartz, MD, FACC, a cardiologist with the Minneapolis Heart Institute in Minneapolis. “It gives us extremely rapid, three-dimensional images which allow us to capture the entire beating heart.”
The downside to this achievement is that massive amounts of data are generated to deliver the 3D data sets from today’s multidetector CT systems, Schwartz notes. “One needs to be able to handle those massive amounts of data in a very efficient and facile way to make a diagnosis,” he observes.
Eliot L. Siegel, MD, professor and vice chairman of the University of Maryland School of Medicine department of diagnostic radiology and chief of radiology and nuclear medicine for the VA Maryland Healthcare System, observes that the current generation of CT systems presents an interpretation challenge for radiologists. A cardiac CT angiography (CCTA) study can reach 2,000 or more images, while other cardiac studies routinely generate 6,000 to 8,000. “For the new generation of dual-source CT scanners, cardiac imaging studies can create as many as 15,000 images,” Siegel says.
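Those image counts translate directly into data volume. As a rough back-of-envelope sketch (assuming uncompressed 512 x 512 slices at 16 bits per pixel, typical for CT; actual studies vary with matrix size, bit depth and compression), the arithmetic in Python looks like this:

    # Hypothetical study-size estimate: uncompressed 512 x 512 CT slices
    # at 16 bits (2 bytes) per pixel. Real studies vary with matrix size,
    # bit depth and compression.
    BYTES_PER_SLICE = 512 * 512 * 2  # roughly 0.5 MB per image

    for label, n_images in [("CCTA", 2_000),
                            ("routine cardiac", 8_000),
                            ("dual-source cardiac", 15_000)]:
        gigabytes = n_images * BYTES_PER_SLICE / 1e9
        print(f"{label}: {n_images:,} images, about {gigabytes:.1f} GB")

Even the low end puts a gigabyte or more per study in motion, and the dual-source case approaches 8 GB, which is why where that data gets processed matters so much.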
It’s when this tidal wave of acquired image data shows up for interpretation that advanced viz technology, thick or thin, makes its greatest impact.
Cardiac and vascular CT studies can be particularly problematic to post-process, and that is where a bottleneck often occurs. While data acquisition is accomplished in 12 to 15 seconds and patient table time runs roughly 5 to 10 minutes, post-processing can take anywhere from 20 to 60 minutes, depending on case complexity.
Acquisition abundance
Clinicians, for the most part, have little interest in doing advanced image processing, because practice volumes simply do not allow an interpreting physician the time needed to accomplish the task.
“We simply could not handle the huge amount of interpretative data being put out by these systems in an efficient manner,” Schwartz notes. “The workload we would have had by not having advanced visualization technology in place would have been simply staggering.”
Prior to the deployment of advanced visualization tools in his practice, the interpretation time for a CCTA procedure was anywhere between 20 and 30 minutes per exam.
“Since we’ve deployed our advanced visualization system, our read time has dropped down to 3 or 4 minutes for the more straightforward exams. More complex exams, of course, take a little longer,” he says.
A dedicated workstation, or thick client, moves a DICOM dataset from point A to point B, over either a local-area network or a wide-area network such as the internet; the data are processed at the workstation and then made available to other users. A thin-client model instead pushes the DICOM dataset to a centralized server, where the processing takes place, and lets users manipulate the results across a network. It’s in the more complex exams that legacy dedicated advanced visualization workstations can continue to play a role.
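Underneath both models, the dataset moves via the same standard DICOM network service, a C-STORE push. A minimal sketch in Python using the open-source pydicom and pynetdicom libraries (the hostname, port, AE titles and file name below are hypothetical):

    # Minimal DICOM C-STORE push, as used to hand a CT slice to a
    # centralized processing server in a thin-client workflow.
    # Hostname, port, AE titles and file name are hypothetical.
    from pydicom import dcmread
    from pynetdicom import AE
    from pynetdicom.sop_class import CTImageStorage

    ae = AE(ae_title="SENDING_NODE")
    ae.add_requested_context(CTImageStorage)

    # Associate with the (hypothetical) advanced visualization server.
    assoc = ae.associate("av-server.example.org", 11112, ae_title="AV_SERVER")
    if assoc.is_established:
        ds = dcmread("ct_slice_0001.dcm")   # one slice of the study
        status = assoc.send_c_store(ds)     # Status 0x0000 indicates success
        print(f"C-STORE status: 0x{status.Status:04X}")
        assoc.release()

The only architectural difference is the destination: in the thick-client case the study lands on the radiologist’s own workstation for local processing; in the thin-client case it lands on the shared server, and only rendered views travel back to the reader.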
“In a busy radiology practice, the responsibility of who is best suited for 3D image manipulation is often unclear,” says Anthony Garcia, R.T. (R) (CT), 3D Lab/QC Supervisor for Tucson, Ariz.-based Radiology Ltd.
“Most radiologists usually have little time or patience for creating routine batch images or for learning the software to do so. Their time is better spent creating their product, the report. Generating a report in a timely manner is crucial, since it is the final product that provides financial support to a practice. Any activity that detracts from the reading of studies detracts from the bottom line,” he notes.
Paul J. Chang, MD, professor and vice-chairman of radiology informatics as well as medical director of pathology informatics at the University of Chicago School of Medicine, and medical director of enterprise imaging at the University of Chicago Hospitals, agrees that the interpreting clinician needs to assess the capabilities of the advanced visualization technology and budget his or her time accordingly. “There are some reconstructions that require more of my time to perform than is practical given the volume of imaging exams I must interpret,” he says. “In these cases, it is vastly more efficient and economical for a technologist skilled in utilization of the advanced visualization software to perform the image processing, and then have it sent to me for clinical interpretation.”
In these cases, a dedicated thick-client workstation can pick up these complex studies for processing. At Radiology Ltd., a private-practice diagnostic imaging group with more than 40 radiologists and 400 technical, clerical and administrative personnel, both thin-client and thick-client platforms are in use.
According to Garcia, the practice uses a thin-client interface so the radiologist doesn’t have to load all the slice information to the reading station. And, since the reading workstation doesn’t have to expend processing power, on-call radiologists have the 3D lab’s resources at their home offices.
Some radiologists in the group prefer to perform their own advanced visualization post-processing; for these clinicians, Garcia has made dedicated workstation-based applications available. These tools, he explains, are legacy implementations of post-processing software that were in use before the practice implemented its dedicated 3D lab.
The bulk of DICOM dataset reconstructions are carried out by the lab, which is staffed with technical assistants supervised by Garcia. Studies performed by technologists in the practice follow agreed-upon protocols to deliver the maximum clinical benefit for image interpretation.
“Exams have been standardized throughout the company, and each study is built along exacting protocols,” he explains. “There are, of course, allowances for further in-depth evaluation of disease and physical aberrances.”
Garcia is responsible for the technical assistants’ education and for maintaining the standards of the practice. In turn, he is supervised by a radiologist who monitors the lab’s output to ensure quality control.
“Often the responsibility for post-processing falls to the radiologic technologist, an obvious choice since he or she is well versed in anatomy, physiology and the needs of the practice,” he says. “However, next to the radiologist, the technologist is the busiest person in the practice, through whose efforts a productive schedule is maintained. Our technical assistant program provides an excellent option for more efficient and economical staffing that provides benefits for all professionals in a practice.”