ChatGPT offers 'pretty amazing' recommendations on breast cancer screening, but oversight remains critical
Experts recently described ChatGPT’s utility in advising patients on breast cancer screening as “pretty amazing,” but warned that patients should proceed with caution when seeking medical advice from the chatbot.
A team of experts with the University of Maryland School of Medicine (UMSOM) presented ChatGPT with a set of 25 questions related to breast cancer screening recommendations to determine whether the program could reliably offer appropriate guidance. The questions covered everything from symptoms of breast cancer and who is most at risk, to costs associated with exams and how often women should undergo screening.
The same questions were presented to the chatbot three separate times to assess its consistency, which has been questioned by many of its users since its launch. Three radiologists who were fellowship-trained in mammography reviewed its responses for accuracy, consistency and appropriateness. They found that it provided adequate answers to 22 out of the 25 queries.
The findings were shared April 4 in Radiology.
“We found ChatGPT answered questions correctly about 88 percent of the time, which is pretty amazing,” noted Paul Yi, MD, corresponding author of the new paper, assistant professor of diagnostic radiology and nuclear medicine at UMSOM and director of the UM Medical Intelligent Imaging Center (UM2ii).
Yi also noted that the responses generated by ChatGPT were summarized in lay language that consumers could understand without difficulty.
However, despite the chatbot’s ability to answer the majority of questions appropriately, it did not earn a perfect score. It provided outdated information in response to one of the questions about mammograms and COVID vaccination, and on two others its responses varied significantly when prompted to answer the same question multiple times. Those questions related to breast cancer prevention and where someone could obtain a mammogram.
The chatbot also did not offer multiple sources to back up the data it provided, the authors noted.
“ChatGPT provided only one set of recommendations on breast cancer screening, issued from the American Cancer Society, but did not mention differing recommendations put out by the Centers for Disease Control and Prevention or the US Preventive Services Task Force,” said study lead author Hana Haver, MD, a radiology resident at University of Maryland Medical Center.
Overall, the authors suggested that ChatGPT has great potential to serve as a supplementary tool for providing patients with healthcare information, but physician oversight remains “critical” given the inconsistent, and occasionally inappropriate, information it can provide.
“Consumers should be aware that these are new, unproven technologies, and should still rely on their doctor, rather than ChatGPT, for advice,” Yi said.
The study abstract is available here.