Using NHSN C. difficile Infection Rates? Mind your denominator!

Over here in the US hinterland we're completing a systematic review of MDRO outcomes for CDC in cooperation with investigators in Salt Lake City. At the moment we're tackling C. difficile and are busily poring through the literature. We've come across many good studies, such as an ICHE paper from early 2013 by Gase and colleagues from the New York State Dept. of Health that compared NY State CDI surveillance to NHSN in 30 hospitals. The authors noted 80% agreement between the methods and thus recommended that NY State adopt the NHSN LabID method because of its ease of implementation.

Building on that study, Haley and colleagues, also from the NY State Dept of Health, completed an analysis of the sources of bias in NHSN "Hospital Onset" CDI rate calculations using data from 124 NY hospitals. Their findings were published in the January 2014 issue of ICHE and were accompanied by a nice editorial by two of my former Maryland colleagues, Jessina McGregor and Anthony Harris. The NY authors looked at how auditing, inclusion of outside labs, age adjustment, and exclusion of "patient-days not at risk" from the denominator would improve the calculation of hospital-onset CDI rates. As you can see from the portion of Table 2 that I pasted below, most of the corrections had minimal impact on the average hospital-onset CDI rate. However, exclusion of patient-days not at risk had a huge impact. The correct rate after controlling for all factors was 11.6/10,000 patient-days; omitting the corrections for auditing, outside labs, or age adjustment changed this only slightly, whereas failing to exclude patient-days not at risk from the denominator led to a rate that was 45% lower (6.4/10,000 patient-days).

The reason that eliminating "patient-days not at risk" from the denominator had such a huge impact is that the CDC NHSN definition excludes CDI cases that occur in the first three days from the numerator but does not exclude patient stays of less than three days from the denominator. For example, a patient who stays only two days is not at risk of contributing an HO-CDI case to the numerator but still contributes their patient-days to the denominator.
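To make the arithmetic concrete, here is a minimal sketch of the denominator bias using made-up lengths of stay and a made-up case count (not the study's data): the "naive" rate counts all patient-days, while the adjusted rate counts only days at risk, i.e., days beyond the first three of each stay.

```python
# Hypothetical lengths of stay (days) for one hospital's discharges.
stays = [2, 1, 3, 5, 7, 2, 10, 4, 2, 6, 1, 8]

# Hypothetical hospital-onset CDI cases (onset after the first three days).
ho_cdi_cases = 3

# Naive denominator: every patient-day, including days that could never
# contribute a hospital-onset case to the numerator.
total_patient_days = sum(stays)

# At-risk denominator: only days beyond the first three of each stay.
at_risk_days = sum(max(0, los - 3) for los in stays)

rate_naive = ho_cdi_cases / total_patient_days * 10_000
rate_adjusted = ho_cdi_cases / at_risk_days * 10_000

print(f"Naive rate:    {rate_naive:.1f} per 10,000 patient-days")
print(f"Adjusted rate: {rate_adjusted:.1f} per 10,000 at-risk patient-days")
```

Because short stays pile days into the naive denominator without ever being able to add a case to the numerator, the naive rate always comes out lower than the at-risk rate; the shorter a hospital's average length of stay, the bigger that downward bias.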

This has several important implications. First, not removing the patient-days not at risk results in reported CDI rates that are much lower than the true rates. This occurs because many, if not most, patients have stays shorter than four days. Second, as the authors state, "HO-CDI rates at hospitals with shorter LOS are biased downward more than the rates at hospitals with longer LOS." It seems to me that this artificially hurts the rates at tertiary-care and academic medical centers more than it would smaller community hospitals. We always hear how academic hospitals are falling behind, but it may have something to do with how rates are calculated, especially if we are including the wrong patient-days in the denominator. It seems like this would be an easy fix: hospitals could simply exclude the first three days of each stay from their patient-day calculations. I hope this happens.


  1. This was one of the more interesting infection control methodological studies that I have read in some time, and I agree with the accompanying editorial that we need more like it. As pointed out by the authors and Eli, the magnitude of the bias resulting from not adjusting for days at risk is considerable. However, it is interesting that the bias arising from days at risk did not result in substantially larger misclassification than the other sources of bias examined in the study (~8% versus 3%, 6%, and 7% for bias attributable to outside labs, age distribution, and audit). Notably, the overall bias generated by all four sources resulted in 12% of facilities being misclassified. Eliminating any one of these sources of bias on a national scale is methodologically challenging (although the authors provide a potential alternative for dealing with the days-at-risk bias; last row in Table 2). It is attractive to think that misclassification could be substantially reduced by using the authors' estimated denominator (last row, Table 2), but they did not provide data to assess how many facilities would be misclassified if this were the only bias-correction method employed. It would seem that this would be relatively easy to derive from their dataset. Perhaps they could address this in a letter to the editor?

    1. Thanks very much for your comment. The proportion misclassified is a big point I should have highlighted in my post. I wonder if the misclassification is random or would impact certain "types" of facilities more. The reason I highlighted the "days at risk" issue is that it would seem to result in a differential misclassification that impacts certain types of facilities more than others, and as the authors hypothesized, this may hurt hospitals with longer lengths of stay (i.e., larger academic hospitals). I suspect, as you suggested, the authors could also tell us in a letter to the editor which types of hospitals are subject to greater misclassification. It would be interesting, for example, if the "days at risk" issue misclassified large academic hospitals 20% of the time versus 2% for small community hospitals.

    2. One more thing. I think a big problem with the "days at risk" denominator issue is that it results in a significant (45%) underestimation of the burden of hospital-onset CDI. We need to correctly measure the burden of disease for CDI (and other HAIs) so that we can properly communicate HAI burden (incidence/costs/mortality) and ensure that proper attention is paid to our field through research funding, reimbursement for infection control and prevention activities, and perhaps even ID consult billing (one can dream). I think a big reason why there is so little funding for IC departments and IC research is that we don't yet have good estimates of burden of disease.

      Anyway, another reason that, like you, I really liked this study.

