How do radiologists feel about utilizing GPT-4 in practice?
Large language models like OpenAI’s ChatGPT have garnered massive interest among the research community, but how do boots-on-the-ground radiologists feel about deploying these models into clinical practice?
In recent years, there has been much talk of the potential of large language models to improve radiology workflows. Numerous studies have indicated that LLMs hold promise for streamlining some of the administrative burdens radiologists face, such as creating structured reports, communicating incidental findings and data mining. Findings from these studies have been mixed, though most have suggested that, with the right training, LLMs have significant potential in radiology settings.
This has led to many radiologists feeling cautiously optimistic about how LLMs can improve their day-to-day workflows, according to new data published in Insights Into Imaging.
“Healthcare is inundated with large amounts of free-text reports, such as radiological reports, which place a significant burden on radiologists. The inconsistencies in style and structure result in variability and complexity in radiological reports, hindering effective communication of information among various medical departments,” wrote Shenghong Ju, with the Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, and colleagues. “GPT-like technologies have shown great potential in structuring and correcting errors in free-text radiological reports and providing therapeutic suggestions through these reports.”
The team queried a group of more than 1,200 radiologists and trainees in China on their perceptions of GPT-like technologies with respect to their impact on clinical practice, training and education. The group also shared their opinions on regulatory concerns and future development trends related to LLMs in radiology.
The majority of respondents expressed optimism about the potential of LLMs in clinical practice, with two-thirds indicating high degrees of acceptance. Responses suggested participants were most hopeful about the potential for GPT-like models to improve reports and communication, serve as decision support tools and enhance education. The group indicated their biggest concern related to the technology is how to achieve proper regulatory oversight.
“GPT-like technologies introduce a set of challenges that must be addressed to ensure their responsible and effective use, such as medical malpractice liability, privacy, and others. Legal regulation is a key challenge for LLMs,” the authors noted. “Our results show that government regulations were strongly associated with tool acceptance, highlighting the need for developer-regulator collaboration.”
The group also highlighted an upward trend in providers favoring the deployment of LLMs into clinical practice compared to findings from prior years’ studies on the subject.
Learn more about what providers see as the pros and cons here.