AI program ChatGPT now has a published article in Radiology—is it any good?

Artificial intelligence (AI) can officially add medical writing to the list of skills on its ever-growing resume.

“ChatGPT and the Future of Medical Writing” was published on Feb. 2 in Radiology. The article, submitted and edited by Som Biswas, MD, of the department of pediatric radiology at Le Bonheur Children's Hospital in Tennessee, highlights the benefits and inherent risks of using AI in medical publishing. It concludes that, when used with caution, AI could be “a powerful tool” in the future of the field.

ChatGPT (Generative Pre-trained Transformer) is an AI language model that can produce human-like text. Developed by OpenAI, the model was trained on a large dataset of text so that it could generate similar text of its own. Such models are widely used in natural language processing, most visibly as chatbots, but following ChatGPT's launch in November 2022, AI authorship quickly gained momentum in the field of medical publications, while also taking a lot of heat from skeptics.

Though it does not seem Biswas is a skeptic, he notes several considerations that need to be made when using the tool.

In a section headed “Human Author’s Cautions,” Biswas highlights six potential issues with using AI as an author:

  1. Ethics. Biswas notes that AI cannot be held accountable for its content, but rather that human editors would likely be held responsible. Additionally, there is the matter of authenticity when things like letters of recommendation and personal statements are generated by AI.
  2. Legal issues. Biswas highlighted three specific areas that could be most at risk of legal hurdles with AI—copyright, compliance and medicolegal. There are currently no official policies pertaining to AI authorship in these realms.
  3. Innovation. Without frequent retraining on new data, AI tools will produce material that grows redundant and lacks engagement and creativity, Biswas suggested.
  4. Accuracy. Like humans, AI authors are subject to producing errors. Unlike humans, however, ChatGPT does not currently provide assessments of its own accuracy; humans can self-edit, but AI cannot.
  5. Bias. Bias is a well-documented issue across all AI applications. Due to the way algorithms are trained, AI tools are only as diverse as the datasets they are trained on. This, Biswas suggests, can amplify bias.
  6. Transparency. The author notes that users need to be made aware of the writing and text identification processes of the machines or programs they are using.

The ChatGPT article can be viewed here.

In addition to her background in journalism, Hannah also has patient-facing experience in clinical settings, having spent more than 12 years working as a registered rad tech. She began covering the medical imaging industry for Innovate Healthcare in 2021.
