Lack of diverse datasets in AI research puts patients at risk, experts suggest

New research published in PLOS Digital Health calls attention to disparities in artificial intelligence research that could hinder its effective deployment in clinical settings. 

Researchers analyzed more than 30,000 artificial intelligence clinical papers indexed in PubMed in 2019 and found that more than 50% of AI studies used databases from the U.S. or China, and that nearly all of the top 10 databases and author nationalities belonged to high-income countries. Such homogeneous datasets, the authors explained, can create research bias that hinders the clinical efficacy of AI applications. 

“The introduction of AI into healthcare comes with its own biases and disparities; it risks thrusting the world toward an exaggerated state of healthcare inequity,” William Greig Mitchell, of the Harvard T.H. Chan School of Public Health in Boston, Massachusetts, and co-authors wrote. “Repeatedly feeding models with relatively homogeneous data, suffering from a lack of diversity in terms of underlying patient populations and often curated from restricted clinical settings, can severely limit the generalizability of results and yield biased AI-based decisions.” 

To illustrate how datasets can introduce unintended bias, the authors used the example of diabetic retinopathy studies conducted on a cohort of patients from a small urban area in the U.S. However promising those results may be, they might not generalize to other populations, such as patients in rural Japan. 

“Unequal access to the very factors that have facilitated the proliferation of AI in healthcare (e.g., readily available electronic health information and computer power) may be widening existing healthcare disparities and perpetuating inequities in who benefits most from such technological progress,” the authors explained. 

The researchers narrowed their focus to 7,314 of the more than 30,000 artificial intelligence studies. Among those, most datasets used to train AI models came from the U.S. (40.8%) or China (13.7%). The pattern was similar for author nationality, with 24% of authors from China and 18.4% from the U.S. First and last authors were predominantly male (74.1%), and radiology was the most common clinical specialty, followed by pathology. 

Applying models trained on homogeneous data in clinical settings with demographically diverse populations poses risks to both patients and research, the authors suggested, as clinicians using the models could make treatment decisions based on data that are inappropriate for specific populations. 

“Although medicine stands to benefit immensely from publicly available anonymized data informing AI-based models, pervasive disparities in global datasets should be addressed. In the long-term, this will require the development of technological infrastructure in data-poor regions (i.e., cloud storage and computer speed),” the authors wrote. 


Hannah Murphy

In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.
