Stakeholders should exercise caution when using hospital quality rating systems to identify top-performing hospitals, according to a new report from the Rating the Raters Initiative.
The article, published in NEJM Catalyst, evaluates four public hospital quality rating programs on an A to F scale, in which A represented an ideal rating system with little chance of misclassifying hospital performance, and F represented a poor rating system with a greater likelihood of misclassification.
The authors gave the highest grade, a B, to the U.S. News & World Report rating system, which they judged most responsive to stakeholder feedback and to changes in measurement science. The Centers for Medicare & Medicaid Services Star Ratings received a C, Leapfrog a C-, and Healthgrades a D+.
The authors evaluated the rating systems using six criteria, including:
- potential for misclassification of hospital performance;
- scientific acceptability;
- iterative improvement; and
- transparency.
Researchers developed standardized fact sheets for each system that included the number of hospitals reviewed, the number of elements included, and the risk-adjustment methodology used. They identified several limitations to public reporting of hospital quality, including:
- data and measurement limitations;
- lack of robust data audits;
- variation in composite measure development methods;
- lack of formal peer review;
- potential financial conflicts from monetizing ratings; and
- diversity in hospital type and volume.
Suggestions to mitigate these limitations included stratifying hospitals into peer volume groups and relying more heavily on process and patient experience measures, which are less influenced by patient volume.
Contact Senior Director of Policy Erin O’Malley at firstname.lastname@example.org or 202.585.0127 with questions.