AI able to assess invasiveness of lung lesions to aid in surgery

Researchers in China have developed an artificial intelligence (AI) model that can mark the level of invasiveness of adenocarcinoma tumors in lung cancer patients, offering providers assistance as they develop surgical care strategies. The findings from the study on the development of the deep-learning model are published in Radiology. [1]

Lung adenocarcinomas—which show on CT scans as ground-glass nodules (GGNs)—are often difficult for radiologists to assess. The AI from the study is able to identify lung lesions as preinvasive, minimally invasive or invasive based on CT images, helping surgeons to time procedures for improved patient outcomes. 

“Compared with solid nodules, GGNs have a relatively indolent behavior. This may contribute to overdiagnosis and overtreatment, and ensuring appropriate management is important,” the study authors, led by Zhengsong Pan from Shanghai Jiaotong University, wrote. “The surgical management of GGNs is determined mainly according to the degree of invasiveness of the underlying pathology.” 

“For preinvasive lesions, including atypical adenomatous hyperplasia and adenocarcinoma in situ, the optimal timing of limited resection can be determined by means of follow-up, while minimally invasive adenocarcinoma and invasive adenocarcinoma require immediate surgical intervention,” the authors added. 

Multiple AI models were developed to address this challenge, each trained on a large dataset of lung CT images encompassing a total of 4,929 nodules from 4,483 patients. The images were divided into a training set, a validation set and an internal test set. An external test set, consisting of images of 361 GGNs from 281 patients, was used for comparison.

Various learning methods were tested, including AI models using a radiomics approach, others dedicated strictly to deep learning, and ones that combined the two methodologies. A single model that leveraged radiomics in conjunction with deep learning yielded the best results, especially for predicting minimally invasive adenocarcinoma. The highest achieved accuracy was 85%, with 75% sensitivity and 89% specificity.

While no model was entirely accurate, the results show these algorithms can effectively identify the invasiveness of GGNs with enough specificity to aid clinicians who would otherwise rely entirely on their own expertise. The AI could also reduce the overtreatment of GGNs that do not currently pose a significant risk to patients.

However, the study authors acknowledge several limitations to their work, mainly that the team “focused solely on evaluating the performance of our model without comparing it directly to the clinical performance of radiologists.” Additionally, the models were tested exclusively on GGNs from Asian individuals, limiting the generalizability of the findings to patients of other racial backgrounds.

“[Additionally,] due to the unavailability of complete smoking data for the enrolled patients, this study lacked an analysis of the effect of smoking on model performance. Finally, we included patients with more than one nodule. The potential correlations among multiple nodules in a single patient may give rise to a ‘clustering effect,’ heightening the risk of overfitting to the distinctive characteristics of that specific patient,” the authors concluded. 

Read the full study at the link below.

Chad Van Alstin, Health Imaging | Health Exec

Chad is an award-winning writer and editor with over 15 years of experience working in media. He has a decade-long professional background in healthcare, working as a writer and in public relations.
