We need better HAI metrics

Over the past few months, my Chief Medical Officer and I met with Department Chairs to discuss incentive-based quality and safety metrics. In these meetings we agree upon metrics tied to financial incentives for achieving specified targets. The goal is a win-win: we improve patient care and the department benefits. We typically include at least one healthcare-associated infection metric.

With one of our surgical chairs, we were discussing surgical site infection (SSI) rates. We all agreed that these are excellent metrics on which to focus. The more contentious issue was the target rate to be achieved. We suggested a 10% reduction from the current rate. His counter-argument was that his department's SSI rates were already very low and further reduction might not be achievable. He suggested that national benchmarks be used. That's a great idea, except that we no longer have national benchmarks. NHSN used to provide pooled mean infection rates with percentile scoring, using data that were updated at regular intervals. This is no longer the case. Now NHSN provides standardized infection ratios (SIRs), calculated by comparing the observed number of infections to the expected number. SIRs could be very helpful in this case, but in reality they aren't, since the expected numbers of infections are derived from data collected from 2006 to 2008. So using the SIR, all I could tell the surgical chair was how his current SSI rates compare to those of other programs nationally a decade ago. If the expected number of infections is derived from data that are a decade old, it defeats the entire purpose of the SIR. For this purpose, the SIRs that NHSN produces are worthless.
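The arithmetic behind the SIR is simple: observed infections divided by the number predicted from a national baseline model. A minimal sketch, using hypothetical counts rather than real NHSN values:

```python
# Standardized infection ratio: observed infections divided by the number
# predicted from a baseline model. All numbers below are hypothetical.

def sir(observed: int, expected: float) -> float:
    """SIR = observed / expected; 1.0 means 'as predicted by the baseline'."""
    if expected <= 0:
        raise ValueError("expected infections must be positive")
    return observed / expected

# A hospital with 8 observed SSIs against 10 predicted by a 2006-2008
# baseline gets SIR = 0.8 -- but only relative to a decade-old standard.
print(sir(8, 10.0))  # 0.8
```

The ratio itself is only as meaningful as the baseline used to derive the expected count, which is the crux of the complaint above.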

Hospitals have two needs with regard to quality metrics: internal trending (are we getting better or worse over time?) and benchmarking (how do we compare to other hospitals?). The SIRs that are currently being produced can be used for internal trending, but not for benchmarking. This leaves hospitals completely in the dark about their comparative performance. CDC is in the process of establishing new baselines for expected infection rates. These will be helpful for a year or so, but then the expected data will become old and benchmarking will again be flawed.

There seems to me to be an easy fix: establish two SIRs. The static SIR would use a fixed data set to derive the expected number of infections, allowing hospitals to trend their performance internally over time. The dynamic SIR would use data from the previous year, updated annually, to allow for comparative performance. This could be easily accomplished.
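The two-SIR proposal can be sketched as follows; the expected counts here are illustrative placeholders, not actual NHSN baselines:

```python
# Sketch of the two-SIR idea: one ratio against a frozen baseline (for
# internal trending) and one against a baseline re-derived each year
# (for benchmarking). All expected counts are hypothetical.

FIXED_BASELINE_EXPECTED = 10.0  # from a frozen data set, e.g. a 2015 re-baseline
EXPECTED_BY_YEAR = {2015: 10.0, 2016: 8.0, 2017: 6.5}  # re-derived annually

def static_sir(observed: int) -> float:
    """Static SIR: the expected count never changes, so trends are comparable."""
    return observed / FIXED_BASELINE_EXPECTED

def dynamic_sir(observed: int, year: int) -> float:
    """Dynamic SIR: the expected count comes from the previous year's data."""
    return observed / EXPECTED_BY_YEAR[year - 1]

observed = 8
print(static_sir(observed))         # 0.8  -- are we better than our own past?
print(dynamic_sir(observed, 2018))  # ~1.23 -- how do we compare to peers now?
```

The same observed count yields two different signals: improvement against one's own history, and standing relative to current national performance.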

While we have seen some improvement in NHSN metrics, the overall trend, in my opinion, is that NHSN is moving toward metrics of lesser value (e.g., lab-based automated metrics), and I get the sense that they're not particularly interested in the viewpoint of hospitals. In the value-based reimbursement era, hospitals need valid comparative performance data more than ever, yet CDC appears out of touch and is moving in a completely different direction.


  1. Oh YES! One frustration I have is that unless you enter data into NHSN, you can't get a SIR for your SSIs. Unfortunately, NHSN has made data entry for some surgical procedures very difficult, cumbersome, and time consuming. (For example, hips and spinal fusions: it's very hard for a data-mining program to determine the level of the fusion or the approach of the hip replacement, so an Infection Preventionist has to enter each one manually. No problem for a facility doing about 10-20 procedures monthly....) In addition, for procedures that aren't required to be reported, the benchmarks were based only on the hospitals that chose to report, so do we really have an unbiased sample? Nope; we have the hospitals that are able to devote resources to reporting, and then there are the rest of us.
    I attended the NHSN annual HAI update in Atlanta this year. The most frustrating answer, when trying to get the experts to address apparent flaws in their surveillance definitions, was that we as IPs should go tell our physicians how to document so that their diagnoses will match the definitions. This seems to me not a good use of time, and also not really a way to actively reduce infections (which is the goal, correct?)
    In addition to SSI, I've become increasingly disenchanted with the LabID metric for C. difficile reporting. Again, NHSN admits that, as a proxy measure, it doesn't yield truly accurate data, yet CMS has no problem using it as if it were actual data to decrease reimbursement. I imagine that as the science around C. difficile changes, we will see some very interesting things happen to the LabID metrics.

  2. As one of IDSA’s representatives on the CDC’s C. difficile Risk Adjustment Expert Panel, I feel that I need to weigh in on this excellent post. Based on the first two meetings of this panel, which also includes representation from SHEA, APIC, and ASM, I can conclude that the CDC is very interested in improving the risk-adjustment regression model in addition to re-baselining using 2015 data. I think that the idea of providing a static and a dynamic SIR for HAIs, including CDI, would be beneficial for the reasons stated, and this is certainly something that we can discuss.

    At the same time, many of the clinicians on the panel have expressed their frustration with using LabID for CDI. The sentiment, in my opinion, is that CDI LabID does not benefit patients. The intent may be to reduce reporting burden, but it is driving practices geared mostly toward affecting the LabID measure rather than clinical outcomes. Perhaps the unique epidemiology of CDI does not lend itself to using LabID, unlike some of the other HAIs.

    The other frustration expressed during these discussions is that CDC/NHSN have very limited data inputs with which to risk-adjust and level the playing field. For example, some of the most important factors affecting the risk adjustment are whether a hospital performs transplants, or a measure of a facility’s aggressiveness in testing patients. Appropriately, there is great concern on the part of CDC/NHSN that requiring more data from facilities would further burden the thin resources of our infection preventionists. This leaves us in the “garbage in, garbage out” predicament. Because of these issues, coupled with the fact that the diagnosis and treatment of CDI are in such flux at the moment, using the CDI LabID SIR, even with the new 2015 baseline, removes the VALUE from Value-Based Purchasing programs and cannot possibly be used to fairly reward or punish facilities.

    Philip Robinson, MD

    1. Appreciate your comments. We have shown that the current model to account for PCR does not appropriately risk-adjust. So one way to improve the LabID CDI metric would be to report toxin-positive cases only (as is done in the UK) for hospitals that do both toxin detection and PCR. This would make the metric more meaningful. In many hospitals the majority of cases are toxin negative, and the implications of that are unclear.

  3. Mike: a very important observation/topic about incentive-based quality and safety metrics. Your win/win of improving patient care while the department benefits is essential to moving both goals forward, but developing the right metrics is certainly foundational. I agree completely with your comment that ‘Hospitals have two needs with regards to quality metrics: internal trending (are we getting better or worse over time?), and benchmarking (how do we compare to other hospitals?)’, but I think there is another need with respect to hospital quality metrics that is often overlooked, especially by hospital administration. It’s easy to understand and reward improvement toward target metrics such as SSI rates, but even the best programs reach a level where there isn’t much room to go lower. Programs (especially those without enough resources to staff properly) often shift emphasis to another HAI target once one is brought ‘under control’, and performance on the original one can slip. I think that sustained, year-on-year maintenance of a low SIR at target should also be measured, reported, and incentivized. This would be your "dynamic SIR" measured over time. I think this is an important aspect of comparative performance across hospitals, and one that should be easy for CDC to measure with historical NHSN data. Finally, we should insist that not only the department (in your example, surgical departments) benefit, but that the ID physician or group leading the effort across hospital departments also benefit from the incentive when the target reductions, or maintenance of the "dynamic SIR", are achieved in the programs they lead.

    1. Agree completely, Dan. And I suspect that your last point rarely occurs, as we tend not to advocate well for ourselves.

