AI accurately determines children’s bone age from hand x-rays

Stanford researchers have developed a deep-learning neural network model that can determine the bone age of children from a hand radiograph about as accurately as an expert radiologist—and as an existing software package that uses a feature-extraction approach and has been cleared for clinical use in Europe.

Reporting their work in Radiology, the study authors suggest their model might be applicable to other fairly simple image-interpretation tasks across radiology.

Lead author David Larson, MD, MBA, senior author Curtis Langlotz, MD, PhD, and colleagues trained and validated the model with more than 14,000 hand x-rays, along with the corresponding radiology reports, from two children’s hospitals.

They tested the model against the expert radiologists and the European software using two measures.

The first used age estimates from 200 clinical radiology reports, plus those of three additional expert readers, as the reference standard.

The second used around 1,400 exams from the publicly available Digital Hand Atlas, comparing the model’s estimates against published results from the European software.

They found the mean difference between the bone age estimates of the model and those of the reviewers was 0 years, with a root mean square difference of 0.63 years and a mean absolute difference of 0.50 years.

Further, the estimates of the model, the clinical report and the three reviewers were within the 95 percent limits of agreement.
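As a rough illustration of how agreement metrics like these are computed—the numbers below are hypothetical, not data from the study—a mean difference captures bias between two readers, the root mean square and mean absolute differences capture typical error size, and the Bland-Altman 95 percent limits of agreement are the bias plus or minus 1.96 standard deviations of the differences:

```python
import numpy as np

# Hypothetical bone-age estimates in years (illustrative only;
# not data from the Stanford study).
model = np.array([10.2, 7.9, 13.1, 5.4, 11.8, 9.0])
reviewers = np.array([10.0, 8.3, 12.7, 5.6, 11.5, 9.4])

diff = model - reviewers
mean_diff = diff.mean()                 # bias between the two readers
rms_diff = np.sqrt((diff ** 2).mean())  # root mean square difference
mad = np.abs(diff).mean()               # mean absolute difference

# Bland-Altman 95% limits of agreement: bias +/- 1.96 * SD of differences
sd = diff.std(ddof=1)
limits = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
```

A pair of readers "agrees" in the Bland-Altman sense when nearly all of their paired differences fall within these limits, which is the criterion the sentence above refers to.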

“Our results suggest potential broad applicability of deep-learning models for a variety of diagnostic imaging tasks without requiring specialized subject matter knowledge or image-specific software engineering,” Larson and colleagues comment. “Specifically, machine learning models developed for other vision tasks … may also be generalized to tasks in the medical domain.”

Qualifying their results, the authors stress that automated assessment of bone age probably ranks among the easiest applications for deep learning in medical imaging.

“Although our results are encouraging for application of deep learning in medical images, they do not necessarily indicate how successful such applications will be when applied to more complex and nuanced imaging tasks,” they write.

Nevertheless, in a time when all eyes are peeled for the next big thing involving artificial intelligence—not least in radiology—the gist of their conclusion warrants attention:

“A deep learning-based automated software application with accuracy similar to that of a radiologist could be made available for clinical use.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

