
Evaluate Us, Please: How Transparent Physician Ratings Improve Performance, Outcomes

Doctors are competitive by nature, especially when they are compared against their peers. If they can fully trust how they are evaluated and how they compare with their peers, they will respond with great enthusiasm.

As physicians, our hearts are in the right place.

We share a universal desire to help patients and are committed to providing them with the best care possible. We also need feedback about how our care decisions impact our patients, communities, and institutions. More specifically, we strive to avoid inappropriate or wasteful care. The collective cost of unnecessary and inappropriate care is high (between $760 billion and $935 billion annually), but masked in these numbers is the reality that inappropriate care inflicts unnecessary discomfort and hardship on patients. So how can we do better?

Physician ratings are a common form of evaluation, providing insight into the practice patterns of doctors who excel at delivering high-quality care. Some ratings offer guidance on what needs to improve and can serve as a reference for how our performance stacks up against that of our colleagues.

But the methodology for generating many physician ratings is often a “black box,” where a rating is the output, but the inputs and methodology aren’t always clear. This lack of transparency makes it difficult to motivate changes in physician behavior because doctors don’t trust the ratings.

Measuring what matters

How do we produce a complete picture of physician quality that doctors can trust as a basis for improvement?

Most physician ratings methodologies provide only a snapshot of provider performance during a given window of time and fail to consider whether the care delivered was necessary. They don’t evaluate how well physicians follow evidence-based guidelines proven to produce better patient outcomes or whether they achieve those outcomes at a reasonable cost.

Additionally, very few ranking methodologies provide a comprehensive picture of provider performance: they rarely assess, in a reliable way, a physician's success in treating a particular condition or whether that physician consistently avoids risky or unnecessary care.

It is essential for any ratings engine to measure these factors so its ratings are useful to physicians, especially those who want to use them as a benchmark to chart a path for improvement. For such ratings to be meaningful, rigorous analytical methods and statistical analyses should focus on three key areas of provider performance: appropriateness, effectiveness, and cost.

  • Appropriateness: Measures that seek to identify the providers delivering the right amount of care consistent with the latest evidence-based guidelines and benchmarked against peers. Appropriateness measures must include both measures of low-value care and measures of highly discretionary care.
  • Effectiveness: Measures that seek to characterize patient-relevant outcomes for a particular specialty or procedure, along with measures that evaluate the extent to which services known to improve health were delivered.
  • Cost: Measures that extend beyond unit price to include provider-level differences in what was done (appropriateness) and the outcomes associated with what was done (effectiveness) in addition to differences in site of service and contracted rates.

From black box to glass box

For physicians to truly trust the results and spend valuable time and effort to improve, transparency is a necessity – we need to transform the black box into glass. Physicians want to know where their data comes from and what methodologies are used to calculate the ratings. They need to know the analyses are independently conducted with the oversight of a credible advisory board of doctors and data analysts from prominent academic and medical institutions. They must be confident the data is adjusted for risk, socioeconomic factors, and associated conditions, and that measures of appropriateness and quality are subject to clinical validation, ensuring their practices are accurately compared against those of their peers.

Armed with high-quality data about their own performance, doctors can target areas for improvement tied to the relevant clinical guidelines. Improvements against those measures will lead to better care quality, less inappropriate care, lower costs, and better patient outcomes. And when employers, health plans, and hospitals begin to cooperate on improvement strategies informed by these ratings, we can make meaningful improvements in care across entire regions as physicians align their practice to boost their ratings.

The goal with any ratings engine should be to help physicians improve. If we want them to engage, there can be no hidden agendas or secret measurements. Everyone in the process, especially doctors, benefits from clear, transparent, trustworthy information about their own performance.

Doctors are competitive by nature, especially when they are compared against their peers. If they can fully trust how they are evaluated and how they compare with their peers, they will respond with great enthusiasm. No doctor wants to contribute to the gigantic problem of wasteful, inappropriate or ineffective care. Equipped with tools they trust, they will blaze a trail toward continual improvement.

Photo: Prostock-Studio, Getty Images



Matthew Resnick

Matthew Resnick, MD, MPH, is chief medical officer at Embold Health, a data analytics company that measures provider, group, hospital and health system performance. Dr. Resnick guides provider engagement efforts to translate data insights into measurable improvements in the value of care delivered within local communities. He remains on the faculty of Vanderbilt University Medical Center, where he maintains an active health services research program.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.
