The Numbers Game: Bringing Quantitative Data to Trials & Practice

The art of image interpretation is giving way to objective scientific quantification. If imaging is going to truly meet the requirements of evidence-based, personalized medicine, it will take more than subjective—and occasionally ambiguous—scan reads. Fortunately, a number of organizations are already hard at work designing processes to help leverage the next generation of quantitative imaging data.

Consider the example of a PET scan for lung cancer. Currently, clinical evaluation of such a study is not done quantitatively. Physicians view the scan looking for “hot” areas that indicate tumor uptake of the 18F-fluorodeoxyglucose (FDG) radiotracer. If the scan appears “cooler” after treatment, it indicates a response to the treatment.

“But the issue may be, if those two PET images were taken on different instruments…or the technologist didn’t perform the scan at the same time with the same settings, the physician may be wrong about the patient’s response if the difference [in FDG uptake] is not large,” says Paula M. Jacobs, PhD, associate director of the National Cancer Institute’s (NCI’s) Cancer Imaging Program. A major difference in tumor response can be seen so long as a scanner is functioning at all, but when trying to evaluate subtle responses early in therapy, physicians need assurances that the scans they are evaluating are comparable.
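The uptake comparison at issue typically rests on the standardized uptake value (SUV), which normalizes measured tissue activity by injected dose and body weight. As an illustrative sketch only (the function name and simplifications are mine; clinical SUV calculation also involves decay correction and careful unit handling), the core arithmetic is:

```python
def suv_bw(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight standardized uptake value.

    Dimensionless, assuming tissue density ~1 g/mL so that
    kBq/mL is comparable to kBq/g.
    """
    return tissue_kbq_per_ml / (injected_dose_mbq / body_weight_kg)

# A lesion reading of 5.0 kBq/mL after a 350 MBq injection in a 70 kg patient:
print(suv_bw(5.0, 350.0, 70.0))  # -> 1.0
```

Small differences in this number between two scans are only meaningful if scanner calibration, timing and acquisition settings are consistent, which is exactly the comparability problem Jacobs describes.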

Variability also clouds the recommendations of radiologists. A recent study of CT and MRI scans for pancreatic lesions found wide radiologist-to-radiologist variation in the proportion of cases recommended for follow-up imaging, from 10.5 percent to 76.9 percent (Radiology 2011;259:136-141). The authors attributed more than 80 percent of the variation to the personal opinions of the individual radiologist. An accompanying editorial warned that “variation in reporting can lead to confusing recommendations to referring physicians on the same patient, eroding referrer confidence and jeopardizing referrals.”

The search for standards

Standardized equipment and standardized quantitative measurements can more accurately guide decisions on additional imaging or the course of chemotherapy or radiation treatments, but they also have important benefits in the clinical trial setting. The size of clinical trials can be reduced if uncertainty in physical data collection is limited, which in turn drives down costs and time required for the trial.

To this end, Jacobs and colleagues within the Cancer Imaging Program, including Robert J. Nordstrom, PhD, are overseeing the Quantitative Imaging Network (QIN), a collection of research teams located primarily within NCI’s designated cancer centers. They are tasked with improving quantitative methods for the prediction and/or measurement of tumor response to therapies in the multi-center clinical trial setting, with the ultimate goal of supporting the role of quantitative imaging in clinical decision making.

Initiated in 2008, QIN has grown considerably in recent years, expanding to 19 sites. It represents the first consensus effort to bring the oncology, radiology, medical physics, and informatics communities together to develop tools and methods for the trial setting, explains Laurence Clarke, PhD, branch chief of imaging technology development for the Cancer Imaging Program.

The research challenge is complex and multifaceted, requiring innovative solutions throughout the entire imaging process. This starts with data collection and analysis for all commercially supported imaging modalities. Clarke says there is considerable variability in collecting data from advanced techniques such as diffusion-weighted imaging (DWI) and PET/CT. To remedy this, QIN teams are working with specialized phantoms to characterize PET/CT and MRI systems, reverse engineering a correction for variability across the major scanner vendors and different clinical trial sites. “This is really a software tool to actually reduce measurement uncertainty across each commercial platform, and each model of that platform,” says Clarke. The imaging industry has shown interest in adopting this solution for the clinical trial setting.
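The phantom-based approach Clarke describes can be sketched as a per-scanner calibration: each site scans a phantom with a known ground-truth value, and the ratio of truth to measurement becomes a correction applied to subsequent patient measurements. This is a deliberate simplification (the actual QIN tools model many more sources of variance than a single multiplicative factor), but it conveys the idea:

```python
def calibration_factor(phantom_truth, phantom_measured):
    """Multiplicative correction derived from a phantom scan on one scanner."""
    return phantom_truth / phantom_measured

def harmonize(patient_value, factor):
    """Apply the scanner-specific correction to a patient measurement."""
    return patient_value * factor

# Two scanners measure the same phantom (true value 2.0) differently:
f_a = calibration_factor(2.0, 2.2)   # scanner A reads high
f_b = calibration_factor(2.0, 1.8)   # scanner B reads low

# After correction, the same underlying lesion reads comparably on both:
print(round(harmonize(4.4, f_a), 2), round(harmonize(3.6, f_b), 2))  # -> 4.0 4.0
```

The point of such a correction is that response assessments made at different trial sites, or on scans from different scanner models, become directly comparable.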

QIN researchers at the University of Iowa, Iowa City, received an NCI R01 grant to work with vendors on a software tool to harmonize PET/CT data, expected to be available in the next year or two, and they are using the QIN network as a test bed for these methods. Meanwhile, at the University of Michigan, Ann Arbor, the QIN team is doing the same with DWI, and Clarke says the major vendors will eventually offer the software correction to bring variance in DWI data collection down to within approximately 3 percent.

To help ensure sites participating in a trial are on the same page, NCI started the Centers for Quantitative Imaging Excellence (CQIE) program to establish what it refers to as “trial ready” sites that are capable of conducting clinical trials involving a molecular or functional advanced imaging endpoint. The program disseminates standard operating procedures to qualify PET, MRI and CT scanners for volumetrics, dynamic contrast enhanced (DCE) MRI and dynamic and static PET, says Lalitha K. Shankar, MD, PhD, branch chief of clinical trials. In addition, the program has qualified scanners for these procedures at the NCI Comprehensive Cancer Centers. 

Shankar says the idea was to create uniform operating procedures for scanner QA and QC as an easy-to-use “cookbook” for technologists. The procedures are available online, have gained traction internationally and have garnered interest from a pharmaceutical industry eager for better data on drug therapy response. CQIE is more of an educational process, with no sites flatly rejected, though some may need to make adjustments and requalify. The end result is a set of qualified centers ready to participate in oncology trials with advanced imaging components when a study of particular importance or interest arises.

“You don’t want to be qualifying your scanner after you hear that there’s a study your site may be participating in,” says Shankar.

After ironing out image acquisition, the next barrier to consensus solutions for quantitative imaging trials is data and tool sharing. Clarke says QIN has worked with the Radiological Society of North America (RSNA) to develop tools that de-identify DICOM image data and rigorously search DICOM headers to ensure no patient information remains before data are hosted in NCI’s Cancer Imaging Archive. Archive storage requirements have grown enormous, and Clarke says neither NCI nor NIH has yet solved the problem of scalability for research domains such as imaging and genomics.
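The header-scrubbing step can be illustrated with a minimal sketch. Real de-identification pipelines such as the RSNA tooling mentioned above follow the DICOM standard’s confidentiality profiles and handle far more attributes; here a plain dictionary stands in for a DICOM header, and the tag blocklist is a hypothetical, abbreviated one:

```python
# Illustrative only: an abbreviated blocklist of patient-identifying tags.
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
            "PatientAddress", "InstitutionName", "ReferringPhysicianName"}

def deidentify(header):
    """Return a copy of the header with patient-identifying tags blanked."""
    return {tag: ("" if tag in PHI_TAGS else value)
            for tag, value in header.items()}

header = {"PatientName": "DOE^JANE", "PatientID": "12345",
          "Modality": "PT", "StudyDate": "20120301"}
clean = deidentify(header)
print(clean)  # identifiers blanked; Modality and StudyDate retained
```

The harder part in practice, and the reason rigorous header searches are needed, is that identifying information can hide in free-text and private tags rather than the well-known fields.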

Having standardized data to compare tools against is one important step, but the next is actually sharing those tools, adds Clarke, particularly metrology tools, which are used to compare algorithms and evaluate the performance of clinical decision support software.

To achieve this, Clarke says they will leverage Hubzero.org, an open source platform supporting scientific research developed by Purdue University in West Lafayette, Ind., with a grant from the National Science Foundation. NCI is exploring its use to share tools for all research domains, including genomics, and QIN is currently experimenting with sharing tools on the platform to drive consensus of methodology.

“Once we analyze the data and we’re confident the data and the software methods are working correctly, we will eventually make the data and metrology tools publicly available for all researchers to use, both nationally and internationally, and to support NCI’s recently organized National Clinical Trials Network,” says Clarke.

For today and tomorrow

If QIN is focused on impacting clinical trials in the long term, RSNA’s Quantitative Imaging Biomarkers Alliance (QIBA) is its counterpart focused more on the here and now. Its mission is to improve the value and practicality of quantitative imaging biomarkers and move from imaging in clinical trials to more generalized use of mature biomarkers in clinical care.

QIBA collaborates with numerous stakeholders, including pharmaceutical companies, imaging device companies, informatics vendors, government agencies and clinicians. Its six technical committees—volumetric CT, FDG-PET, DCE-MRI, COPD-asthma, functional MRI, and ultrasound shear wave speed—develop QIBA Profiles that help vendors implement quantitative imaging in their products as well as define user-related procedures necessary for success.

Daniel C. Sullivan, MD, of Duke University Medical Center in Durham, N.C., serves as chair of RSNA’s QIBA Steering Committee. He says the alliance looks at the whole chain of imaging, from patient prep through interpretation, to determine the technical parameters needed to improve reproducibility.

There is a need to integrate quantitative information with decision support, but for this to work, terms must be machine-readable and not subjective. Some vendors have started looking at natural language processing as a workaround, but this is of limited effectiveness.

“What that reflects is the industry’s frustration in that it seems like a long time to them before radiologists are going to embrace including quantitative imaging info in reports,” says Sullivan.
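One way to make report quantities machine-readable, without resorting to natural language processing, is to capture each measurement as a coded structured record rather than prose. The field names below are hypothetical; real systems would use DICOM Structured Reporting and standard terminologies such as RadLex:

```python
# Hypothetical structured record; real systems would use DICOM SR
# templates and coded terminologies rather than ad hoc field names.
measurement = {
    "concept": "tumor_long_axis",   # coded term, not free prose
    "value": 23.0,
    "unit": "mm",
    "series_date": "2012-03-01",
}

# The free-text equivalent that decision support cannot reliably parse:
free_text = "lesion measures roughly 2.3 cm in greatest dimension"

# A downstream rule can act on the coded form directly:
if measurement["unit"] == "mm" and measurement["value"] > 20:
    print("flag: lesion exceeds 20 mm threshold")
```

The contrast is the crux of Sullivan’s point: a decision-support rule can fire on the coded record immediately, while the free-text sentence first has to survive an error-prone parsing step.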

Radiologists’ reluctance to include quantitative data in reports is two-fold, explains Sullivan. Some are resistant simply because they don’t always believe the numbers: they understand there’s variability between scanners and believe they can do a better job evaluating scans subjectively.

The second major reason for reluctance is the impact on workflow. Current algorithms are not intuitive and don’t allow for single-click use. To measure volume, for example, a radiologist may have to outline a tumor on a stack of images before the computer can take over. Current RIS do not have easily integrated fields for quantitative data, so it often must be dictated into free-text fields. The next generation of tools will need single-click analysis of tumors and codeable fields that automatically import quantitative data, as well as the ability to compare scans over the course of a patient’s treatment.
“People are only going to use these quantitative tools if it doesn’t interfere with what they’re already doing,” adds Jacobs.
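Once the outline exists, the volume measurement itself reduces to counting segmented voxels and multiplying by the per-voxel volume. A minimal sketch (clinical tools also handle slice interpolation, partial-volume effects and more):

```python
def tumor_volume_ml(mask_slices, voxel_mm3):
    """Volume from a stack of binary segmentation slices.

    mask_slices: list of 2-D lists of 0/1, one per image in the stack
    voxel_mm3:   volume of a single voxel in cubic millimeters
    """
    voxels = sum(val for s in mask_slices for row in s for val in row)
    return voxels * voxel_mm3 / 1000.0  # mm^3 -> mL

# Two 3x3 slices with a small segmented region; 1 x 1 x 5 mm voxels:
mask = [[[0, 1, 0], [1, 1, 1], [0, 1, 0]],
        [[0, 0, 0], [0, 1, 0], [0, 0, 0]]]
print(tumor_volume_ml(mask, 5.0))  # 6 voxels x 5 mm^3 -> 0.03 mL
```

The computation is trivial; the workflow cost Sullivan describes lies almost entirely in producing the outline and getting the resulting number into a codeable report field.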

The future for QIN will involve expanding and adding two sites in Canada. Centers are now applying for the competitive NCI funding. Clarke notes the issue is so large that it will take an international effort; there is no point in doing this work only in the U.S. because technology vendors are international.
The work of QIN and QIBA also will begin to dovetail with advances in genomics. In the future, imaging phenotypes will be combined with genomic signatures in the context of clinical decision support, though this will require huge amounts of data.

For Sullivan, quantitative imaging is a necessary part of the transition of medicine from an art to a science, leaving less room for subjectivity. “There’s so much data available to treating physicians, they need to have it in a more objective form and more consistently from their various sources.” 

Evan Godt, Writer

Evan joined TriMed in 2011, writing primarily for Health Imaging. Prior to diving into medical journalism, Evan worked for the Nine Network of Public Media in St. Louis. He also has worked in public relations and education. Evan studied journalism at the University of Missouri, with an emphasis on broadcast media.
