The Centers for Medicare and Medicaid Services runs a website that offers seemingly valuable information about the relative performance of hospitals throughout the United States. The system, known as Hospital Compare (HC), was developed as a handy tool for people trying to figure out which of their local hospitals is best. However, research suggests that the HC scoring is sometimes woefully inaccurate: some of the country’s best hospitals receive unusually low ratings, while many of the smallest hospitals get undue boosts.
This inaccuracy can have serious consequences—if a patient experiencing shortness of breath, for example, seeks out a top hospital but ends up at a lower-quality one with inadequate care, the result could potentially be fatal.
University of Pennsylvania’s Edward I. George, Paul R. Rosenbaum, and Jeffrey H. Silber; Chicago Booth’s Veronika Ročková; and INSEAD’s Ville A. Satopää analyzed Medicare billing records for 377,615 patients treated at 4,289 hospitals, focusing on mortality rates after hospital admission for a heart attack, a key indicator of hospital quality. They find that aggregated HC mortality-rate predictions did not reflect actual mortality-rate patterns in the data.
This miscalibration was most severe for the smallest hospitals, those with the lowest volume of patients and therefore the least data. Because the HC approach lacks the flexibility to cope with this sparseness, it compensates by pulling mortality estimates for small hospitals toward the national average, severely underestimating their risks, the research finds. This adjustment is at odds with a fact established by existing research and clear from the Medicare data: the rate of heart-attack deaths at small hospitals tends to be higher than the national average.
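The understatement can be seen with a toy precision-weighted (shrinkage) calculation. This is a minimal sketch, not the actual HC model: the `prior_weight`, the patient counts, and the 16 percent national rate are hypothetical, while the 28 and 12 percent observed rates echo the study’s reported averages.

```python
# Toy shrinkage estimator: a weighted average of a hospital's observed rate
# and the national average. With few patients, the observed rate carries
# little weight, so the estimate is pulled toward the national average.
def shrink(observed_rate, n_patients, national_rate, prior_weight=100):
    """Precision-weighted average of observed and national mortality rates."""
    return ((n_patients * observed_rate + prior_weight * national_rate)
            / (n_patients + prior_weight))

NATIONAL = 0.16  # hypothetical national average mortality rate

small = shrink(observed_rate=0.28, n_patients=25, national_rate=NATIONAL)
large = shrink(observed_rate=0.12, n_patients=2000, national_rate=NATIONAL)

print(f"small hospital: observed 28.0%, shrunken estimate {small:.1%}")  # 18.4%
print(f"large hospital: observed 12.0%, shrunken estimate {large:.1%}")  # 12.2%
```

The small hospital’s estimate falls from 28 percent to about 18 percent purely because of its low volume, while the large hospital’s barely moves, which is the kind of risk understatement the research describes.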
To overcome the deficiencies of the HC method, the researchers developed a Bayesian statistical model for mortality-rate predictions that incorporates additional variables, including hospital volume, nurse-to-bed ratio, and the hospital’s technological capability. This model more accurately pinpointed the probability that patients would die within 30 days of a heart attack, and it indicated that smaller hospitals in the data set tended to have poorer outcomes. Not every small hospital performed worse than the big ones, but on average they did.
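To illustrate what a covariate-augmented model looks like, the sketch below uses a logistic model in which hospital characteristics shift the predicted 30-day mortality probability. All coefficients and inputs are made up for illustration; this is not the authors’ actual Bayesian specification.

```python
import math

# Hypothetical coefficients -- illustrative only, not fitted to real data.
# The idea: let hospital characteristics (volume, staffing, technology)
# inform the prediction instead of shrinking everything toward one average.
COEFS = {
    "intercept": -1.5,
    "patient_risk": 1.2,    # standardized severity score on admission
    "log_volume": -0.25,    # higher volume -> lower predicted mortality
    "nurse_to_bed": -0.4,   # better staffing -> lower predicted mortality
    "high_tech": -0.3,      # 1 if advanced cardiac facilities are available
}

def mortality_prob(patient_risk, volume, nurse_to_bed, high_tech):
    """Predicted 30-day mortality probability via a logistic link."""
    z = (COEFS["intercept"]
         + COEFS["patient_risk"] * patient_risk
         + COEFS["log_volume"] * math.log(volume)
         + COEFS["nurse_to_bed"] * nurse_to_bed
         + COEFS["high_tech"] * high_tech)
    return 1.0 / (1.0 + math.exp(-z))

# The same patient admitted to a small vs. a large hospital:
p_small = mortality_prob(patient_risk=0.5, volume=30, nurse_to_bed=0.8, high_tech=0)
p_large = mortality_prob(patient_risk=0.5, volume=2000, nurse_to_bed=1.5, high_tech=1)
print(f"small hospital: {p_small:.1%}, large hospital: {p_large:.1%}")
```

Because the hospital-level covariates enter the prediction directly, an identical patient gets a higher predicted risk at the small, lightly staffed, lower-tech hospital, which a one-size-fits-all shrinkage toward the national average cannot express.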
Hospital mortality rates depend on the initial sickness of the patients treated, of course. The average mortality rate of 28 percent at small hospitals was strikingly higher than the 12 percent at large hospitals. When the researchers controlled for the health of incoming patients, the average mortality rate at large hospitals rose to 20 percent—but that still left a mortality-rate gap, one undetected by the HC method but closely predicted by the researchers’ approach.
For public reporting, it is essential to adjust for such patient-mix discrepancies, yet the HC method biases all mortality predictions toward the national average. The researchers propose a different way of standardizing, which they say ultimately removes patient-mix differences from comparisons without compromising their method’s ability to reflect mortality-rate patterns in the data.
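One generic way to remove patient-mix differences is direct standardization: evaluate each hospital’s fitted risk model on the same reference population, so that any remaining gap reflects the hospital rather than its patients. The sketch below illustrates the idea with made-up risk functions; it is not the researchers’ exact standardization procedure.

```python
# Direct standardization: average each hospital's predicted mortality
# over a COMMON reference mix of patient severities.
def standardized_rate(hospital_model, reference_patients):
    """Mean predicted mortality over a shared reference population."""
    return sum(hospital_model(r) for r in reference_patients) / len(reference_patients)

# Hypothetical per-hospital risk functions (severity score -> mortality prob.)
small_hospital = lambda risk: min(1.0, 0.10 + 0.30 * risk)
large_hospital = lambda risk: min(1.0, 0.05 + 0.20 * risk)

reference = [0.2, 0.4, 0.6, 0.8]  # one shared patient-severity mix

print(f"small: {standardized_rate(small_hospital, reference):.0%}")  # 25%
print(f"large: {standardized_rate(large_hospital, reference):.0%}")  # 15%
```

Because both hospitals are scored on identical patients, the 10-point gap here is attributable to the hospitals themselves, which is the kind of apples-to-apples comparison the proposed standardization aims for.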
The researchers say that the US government should increase the accuracy and meaningfulness of HC by adopting a more comprehensive model that incorporates hospital characteristics as variables, and by using their method of standardization to provide more honest hospital-to-hospital comparisons. “Beyond providing a tool to help the public make more informed health-care decisions, such an improved HC can serve as a valuable resource for understanding and improving America’s health-care system,” says Ročková.