Google's latest large language model is poised to give ChatGPT a run for its money in imaging
Not to be outdone by OpenAI's ChatGPT, Google recently introduced the latest version of its own large language model (LLM), which could be particularly beneficial within the realm of medical imaging, according to CEO Sundar Pichai.
PaLM 2 is Google’s answer to OpenAI’s GPT-4, offering improved multilingual, reasoning and coding capabilities. Different versions of the LLM are available, but of particular interest to radiologists is Med-PaLM 2, which was developed using medical data to answer questions and offer insight on health-related topics.
According to Google Research, Med-PaLM 2 "generates accurate, helpful long-form answers to consumer health questions, as judged by panels of physicians and users." Google indicates that one of the main goals of Med-PaLM 2 is to “synthesize information like X-rays and mammograms to one day improve patient outcomes.”
Pichai recently elaborated on the many talents of PaLM 2 at Google’s annual I/O conference, telling the crowd that the latest LLM offerings “are stronger in logic and reasoning” than baseline models. Specifically, the Google CEO shared that Med-PaLM 2 has shown potential for "a 9x reduction in inaccurate reasoning as compared to the base model, approaching the performance of clinician experts answering identical questions."
A Google blog written by Zoubin Ghahramani, VP of Google DeepMind, states that Med-PaLM 2 was the first large language model to achieve “expert” results on U.S. Medical Licensing Exam-style questions. The hope is that the model will eventually be able to interpret information derived from medical imaging and that radiologists can use it as an assistive device to both read images and communicate results, thus improving patient outcomes.
The new LLM is not yet available to consumers. Google shared that later this summer a small group of cloud customers will gain access to Med-PaLM 2 to provide feedback on its use.