Two image features point way to visually lossless CT compression
Researchers have identified image features that could drive a computerized algorithm for compressing CT images without quality loss perceptible to the naked eye, an approach that could relieve pressure on network and storage resources, according to a study published in the September issue of Radiology.
“Adaptive compression may allow us to use irreversible compression without concern about degradation and to save the system resources required for the storage and transmission of body CT images,” wrote Kil Joong Kim, PhD, of Seoul National University College of Medicine, Seoul, Korea, and colleagues.
Using JPEG2000 compression, the authors set about testing image features that the algorithm could use to establish a visually lossless threshold (VLT) for each image. The VLT is the maximum level of compression at which the compressed image remains visually indistinguishable from the original, Kim and colleagues explained.
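The VLT concept can be sketched as a search for the largest acceptable compression ratio. In the study that acceptability judgment came from radiologists (via the QUEST procedure); in this hypothetical sketch a placeholder `judge` predicate stands in for that judgment, and the binary search assumes acceptability is monotone in the ratio.

```python
# Hypothetical sketch of finding a visually lossless threshold (VLT):
# the largest compression ratio at which the compressed image is still
# judged indistinguishable from the original. `judge` is a stand-in for
# the radiologists' perceptual judgment used in the study.

def find_vlt(image, judge, ratios):
    """Return the highest ratio in sorted `ratios` that `judge` accepts.

    judge(image, ratio) -> True if compression at `ratio` leaves the
    image visually indistinguishable (assumed monotone in ratio).
    Returns None if no ratio is acceptable.
    """
    vlt = None
    lo, hi = 0, len(ratios) - 1
    while lo <= hi:                      # binary search over sorted ratios
        mid = (lo + hi) // 2
        if judge(image, ratios[mid]):    # still indistinguishable
            vlt = ratios[mid]
            lo = mid + 1                 # try stronger compression
        else:
            hi = mid - 1                 # back off
    return vlt

# Toy demonstration: pretend any ratio up to 8:1 looks identical.
ratios = [4, 6, 8, 10, 12, 15]
print(find_vlt(None, lambda img, r: r <= 8, ratios))  # 8
```

An adaptive encoder would run such a search (or, as the study proposes, a model-based prediction of its result) per image rather than applying one fixed ratio to every study.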
Because images of different parts of the body or of different section thicknesses show varying degrees of perceptible artifact, each CT image should be compressed to its own unique VLT, making an adaptive, automatic compression algorithm a useful tool.
Kim and colleagues tested five image features for their ability to predict VLT: standard deviation of image intensity, image entropy, relative percentage of low-frequency (LF) energy, variation in high-frequency (HF) energy, and visual complexity. One hundred images from 100 body CT studies in different patients were used for training and testing. Five radiologists independently determined the VLT of each image for JPEG2000 compression using the QUEST algorithm, and the images were then randomly divided into sets of 50 for training and testing. This division was repeated 200 times, and each time a multiple linear regression model was constructed and tested with each of the five image features as the independent variable.
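The evaluation loop described above can be sketched as follows. This is not the authors' code: the feature values and VLTs below are synthetic stand-ins, and a simple one-predictor least-squares fit stands in for the study's regression models.

```python
# Hedged sketch of the repeated-split evaluation: 100 (feature, VLT)
# pairs are split 50/50 into training and testing halves 200 times; each
# time a linear model is fit on the training half and its root mean
# square residual is measured on the testing half. All data is synthetic.

import random

def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rms_residual(xs, ys, a, b):
    """Root mean square of residuals between observed and predicted y."""
    n = len(xs)
    return (sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / n) ** 0.5

random.seed(0)
# Synthetic data: VLT loosely increases with the feature, plus noise.
feats = [random.uniform(0, 10) for _ in range(100)]
vlts = [5 + 0.8 * f + random.gauss(0, 1.0) for f in feats]

errors = []
for _ in range(200):                       # 200 random 50/50 splits
    idx = list(range(100))
    random.shuffle(idx)
    train, test = idx[:50], idx[50:]
    a, b = fit_line([feats[i] for i in train], [vlts[i] for i in train])
    errors.append(rms_residual([feats[i] for i in test],
                               [vlts[i] for i in test], a, b))

avg_error = sum(errors) / len(errors)
print(round(avg_error, 2))  # average test-set RMS residual
```

Repeating the split 200 times, as the authors did, averages out the luck of any single partition when comparing how well each feature predicts VLT.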
Among the tested image features, variation in HF energy and visual complexity were the most promising predictors of VLT, the authors reported. “The root mean square residuals between the VLTs determined by radiologists and those predicted by the multiple linear regression models constructed with variation in HF energy and visual complexity were 1.20 and 1.09, respectively, and intraclass correlation coefficients were 0.64 and 0.71, respectively.”
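The two agreement measures quoted above can be sketched in a few lines. The exact ICC form used in the study is not specified in this article, so this hypothetical sketch uses a one-way random single-measure ICC, with synthetic VLT values standing in for real data.

```python
# Hedged sketch of the agreement measures between radiologist-determined
# and model-predicted VLTs: root mean square residual and a one-way
# random single-measure intraclass correlation coefficient (k = 2).
# The study's exact ICC variant is an assumption here; data is synthetic.

def rms_residual(observed, predicted):
    n = len(observed)
    return (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5

def icc_oneway(observed, predicted):
    """One-way random single-measure ICC over paired measurements."""
    n = len(observed)
    pairs = list(zip(observed, predicted))
    grand = sum(o + p for o, p in pairs) / (2 * n)
    means = [(o + p) / 2 for o, p in pairs]        # per-subject means
    msb = 2 * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((o - m) ** 2 + (p - m) ** 2
              for (o, p), m in zip(pairs, means)) / n
    return (msb - msw) / (msb + msw)               # ICC(1,1) with k = 2

obs = [6.1, 7.4, 8.2, 5.9, 9.0, 7.7, 6.8, 8.5]   # synthetic observed VLTs
pred = [6.4, 7.1, 8.0, 6.3, 8.6, 7.9, 7.0, 8.1]  # synthetic predictions
print(round(rms_residual(obs, pred), 2))
print(round(icc_oneway(obs, pred), 2))
```

A lower RMS residual and a higher ICC both indicate closer agreement between the model's predicted VLTs and the radiologists' judgments, which is why visual complexity (1.09, 0.71) edged out variation in HF energy (1.20, 0.64).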
Of the two, visual complexity had the better predictive performance, but Kim and colleagues noted that using this image feature in adaptive compression would be impractical because computing it took nearly three seconds per image. This limitation could be overcome with parallel processing techniques or by incorporating components of perceptual metrics into compression encoders, they added.
“Although the costs of storage and network resources have continued to decrease, there is still a demand for irreversible compression of [CT] images for long-term preservation and efficient transmission of data, especially between institutions at the regional or national level,” wrote the authors.