AI program ChatGPT now has a published article in Radiology—is it any good?

Artificial intelligence (AI) can officially add medical writing to the list of skills on its ever-growing resume.

“ChatGPT and the Future of Medical Writing” was published on Feb. 2 in Radiology. The article, submitted and edited by Som Biswas, MD, of the department of pediatric radiology at Le Bonheur Children's Hospital in Tennessee, highlights the benefits and inherent risks of using AI in medical publishing, concluding that, overall, it could be “a powerful tool” for the future of the field—when used with caution, of course.

ChatGPT (Chat Generative Pre-trained Transformer) is an AI language model that can produce human-like text. Developed by OpenAI, the model was trained on a large dataset of text so that it could generate similar text of its own. Models of this kind are often used in natural language processing as chatbots, but following the launch of ChatGPT in November 2022, AI authorship quickly gained momentum in the field of medical publications, while also taking a lot of heat from skeptics.

Though Biswas does not appear to be a skeptic himself, he notes several considerations that must be weighed when using the tool.

In a section headed “Human Author’s Cautions,” Biswas highlights six potential issues with using AI as an author:

  1. Ethics. Biswas notes that AI cannot be held accountable for its content, but rather that human editors would likely be held responsible. Additionally, there is the matter of authenticity when things like letters of recommendation and personal statements are generated by AI.
  2. Legal issues. Biswas highlighted three specific areas most at risk of legal hurdles with AI—copyright, compliance and medicolegal liability. There are currently no official policies on AI authorship in these realms.
  3. Innovation. Without frequent updates with new data, AI tools will produce material that becomes redundant and lacks engagement and creativity, Biswas suggested.
  4. Accuracy. Like humans, AI authors are subject to producing errors. However, unlike humans, ChatGPT does not currently provide assessments of its own accuracy. Humans can self-edit; AI cannot.
  5. Bias. Bias is a well-documented issue across all AI applications. Due to the way algorithms are trained, AI tools are only as diverse as the datasets they are trained on. This, Biswas suggests, can amplify bias.
  6. Transparency. The author notes that users need to be made aware of the writing and text identification processes of the machines or programs they are using.



In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She joined Innovate Healthcare in 2021 and has since put her unique expertise to use in her editorial role with Health Imaging.

