5 reasons radiology should replace ‘gotcha’-style peer review with ‘peer learning’

Scoring-based peer assessments of radiologists’ clinical performance should be phased out and replaced by a system of “peer learning.” Properly implemented, the latter approach would go beyond catching image-interpretation errors and singling out those prone to making them; it would incorporate peer feedback, encourage shared learning and facilitate profession-wide improvement.

So contend the authors of a special report published online Sept. 27 in Radiology.

Lead author David Larson, MD, MBA, of Stanford, and colleagues hold that the more collaborative model, which is already used by many radiology practices in the U.S. and the U.K., would better align with the principles set forth by the Institute of Medicine in its 2015 report “Improving Diagnosis in Health Care.”

They point out that the most popular quality assurance (QA) model, the American College of Radiology’s “Radpeer” program, which assigns reviewing radiologists to numerically grade randomly selected radiology reports, is now almost 15 years old.

“[L]ike other QA programs focused on identifying suboptimal performers, it has had substantial limitations,” they write.

Larson and co-authors lay out a 6,500-word, copiously cross-referenced case for transitioning the specialty from peer review to peer learning as defined and described in their article.

“Abandoning a scoring-based peer review model precludes the use of peer review data to evaluate physician competence,” they write in formulating their final argument. “While some may be uncomfortable with this notion, we contend that this is an acceptable tradeoff.”

The authors distill the particulars of their proposal into five primary reasons for making the transition:  

  1. Peer review data have been shown to be biased, unreliable and not easily actionable. “Abandoning such data would not constitute a substantial loss,” Larson et al. write.
  2. Interpretive skill is just one element of physician competence. Other important aspects include professional behavior, continuous improvement efforts and adherence to professional guidelines, the authors note. “Metrics such as participation at conferences, case submissions and improvement initiatives completed can be tracked in place of discrepancy rates,” they add.  
  3. Organizational leaders can use other available data to evaluate for evidence of “outlier status” in professional practice. Data sources could include complaints from referring clinicians, anonymous trainee evaluations and sentinel events. “This approach shifts responsibility for determining competency from the radiologists’ community of peers to radiology practice leaders, presumably by following a predefined process,” the authors explain. “We acknowledge that this strategy is far from perfect, but it is likely as accurate as sampled peer review data, and it preserves the culture by freeing the professional community to focus purely on learning.”
  4. Retrospective case sampling is a weak way to assess competence; simulation and testing offer a far more objective measure of physician performance. “Radiology practices in competitive environments, whose viability depends on ensuring physician skill (such as in teleradiology practices), are beginning to implement their own form of objective skill assessment based on testing, which may constitute the next step in the evolution of radiologist competence assessment,” the authors write.  
  5. The professional model—which stresses a given profession’s obligation to establish and enforce shared standards for quality practice—is based largely on trust that well-meaning skilled professionals will adhere to professional standards and that practice leaders will enforce those standards. “In our experience, when both practitioners and practice leaders are held accountable for fulfilling their professional roles according to their best judgment, a continuous learning approach functions far better in maintaining quality than any existing scoring-based peer review program.”

Larson and co-authors conclude by noting that the careful thinking behind “Improving Diagnosis in Health Care” has specific ramifications for radiology.

“When examining the themes of the IOM report, as well as the theories that underlie them, it is not surprising why the results of scoring-based peer review have been disappointing,” they write. “We believe that increased adoption of a peer-learning model will better enable individual radiologists and radiology practices to establish a culture of continuous learning, identify and improve system-based errors, and continuously improve diagnostic performance for the benefit of their patients and referring providers.

“We call on regulatory and certifying organizations to recognize this opportunity and to accept peer-learning programs as a means of fulfilling existing physician peer review requirements.”

Larson’s co-authors are Lane Donnelly, MD, of Texas Children’s Hospital, Daniel Podberesky, MD, of Nemours Children’s Health System in Orlando, Arnold Merrow, MD, of Cincinnati Children’s Hospital Medical Center, Richard Sharpe Jr., MD, of Kaiser Permanente, and Jonathan Kruskal, MD, PhD, of Beth Israel Deaconess Medical Center in Boston. 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

