Expanding RADPEER could improve quality, safety in radiology
RADPEER, a product developed by the American College of Radiology (ACR) to support radiologists’ quality assessment and improvement through peer review, could be expanded and used to collect data on other aspects of quality and safety in radiology, according to a report published online May 16 by the Journal of the American College of Radiology.
RADPEER was originally designed to be a cost-effective process that enabled radiologist peer review to be performed during the routine interpretation of current images. However, the program has undergone several iterations since it was conceived. “The program opened in 2002, was initially offered to physician groups in 2003, developed an electronic version in 2005 (eRADPEER), revised the scoring system in 2009, and first surveyed the RADPEER membership in 2010,” wrote the report’s lead author, Hani Abujudeh, MD, MBA, of the Massachusetts General Hospital in Boston, and colleagues.
In 2012, a web-based survey was distributed to 16,000 ACR member radiologists with the intention of understanding how to make RADPEER more relevant to its members. Responses were received from 1,589 people. More than three-fifths of the responding practices used RADPEER, and 85.2 percent of those practices had all of their radiologists participate in the program. A total of 73.8 percent of the radiologists who responded said they had been using the program for more than three years.
Most of the responding practices had set peer review targets; 50 percent of the responding physicians preferred targets expressed as absolute numbers of cases rather than percentages of cases.
The survey identified many radiologist concerns about the peer review process, including the potential discoverability of the data, the awkwardness of being graded, the possibility that institutional politics could lead to misuse of the data, the validity of the results, potential harm to the reputations of the involved radiologists, the time and resources required, and the possibility of punitive outcomes. Forty-three percent of respondents indicated that anonymity could improve the reporting of disagreements.
Almost half of the respondents reported that the peer review process is too time-consuming, and nearly a third struggled to see quality improvement as a legitimate outcome of current peer review systems.
“In addition to anonymizing the process, respondents generally felt that improvements in the integration of peer review into the PACS would be helpful and would encourage the use of peer review,” wrote Abujudeh and colleagues.
In terms of whether participation resulted in changes or improvements in group practices, there was no consensus among users. Forty-seven percent of respondents said their practice patterns had not changed, while 33.1 percent were unsure and 19.8 percent said their patterns had changed. Of the 132 respondents who said peer review was useful, 52.3 percent observed improvements in their radiologic interpretations as a result of heightened awareness of quality oversight.
The program has been discussed at national quality meetings, and committee members have proposed expanding RADPEER. “The committee felt that RADPEER could be expanded and used as a means to collect data on other aspects of quality and safety in radiology. The committee also felt that there needs to be consideration to balance additional value against the added burden of submitting the data and its implications for workflow,” wrote the report’s authors.
As of mid-November 2013, RADPEER had 1,147 participating groups comprising a total of 17,037 physicians. The median number of cases reviewed in RADPEER is currently 776 per year.