The Playing Field is Still Warped . . .

Over the past decade, public reporting of facility-specific HAI performance has had a dramatic impact on infection prevention programs. Greater awareness of HAI prevention among a widening range of stakeholders (administrators, payors, patients, etc.) has arguably led to increased emphasis on HAI prevention and, in many cases, more resources for infection prevention programs. These changes were very apparent at my institution, where our staff grew from 3 infection preventionists (IPs), 2 hospital epidemiologists (HEs), and one administrative assistant to 9 IPs, 2 data analysts, 1 chart abstractor, 1 program coordinator, and partial support for 4 HEs (not including antibiotic stewardship program support). HAI performance is front and center among our annual quality goals, tied to department chair incentives, and the days since the last HAI are posted publicly on our inpatient units for all to see. Most importantly, our frontline healthcare workers understand and routinely discuss what were once surveillance acronyms like "CLABSI" and "CAUTI." We've seen remarkable reductions in HAIs (and, more importantly, in the associated patient harm) during this time (e.g., CLABSI rates in our ICUs have fallen 80% over the past 8 years).

Our story is not unique. Many hospitals have noted marked reductions in HAI rates. One could argue that many, if not all, of the "low" and even "middle" hanging fruit have been tackled, and we are reaching the point where uncontrollable differences in patient risk factors and case mix may drive differences in HAI performance across facilities. With HAI performance tied to growing financial consequences, however, the need to ensure a level and fair playing field across facilities is growing. In this context, the recent report from the HHS Office of Inspector General on CMS HAI reporting is very interesting. The report focused on validation of reported HAI data. The OIG found that while sufficient data were validated (per regulatory requirements) and 99% of reviewed hospitals passed validation (only 6 failed), it raised concerns about how hospitals were selected for validation:

"However, CMS’s approach to selecting hospitals for validation for payment year 2016 made it less likely to identify gaming of quality reporting (i.e., hospitals’ manipulating data to improve their scores). CMS did not include any hospitals in its targeted sample on the basis of their having aberrant data patterns. Targeting hospitals with aberrant patterns for further review could help identify inaccurate reporting and protect the integrity of programs that make quality-based payment adjustments."
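The targeting the OIG describes could, in principle, be as simple as flagging statistical outliers in routinely reported data and sending those hospitals for chart review. A minimal sketch of that idea — all hospital names, culture rates, and the 1.5-standard-deviation threshold below are illustrative assumptions, not CMS methodology:

```python
# Hypothetical sketch: flag hospitals whose blood-culture ordering rate is
# an outlier relative to peers (a possible signal of underculturing), so
# they can be targeted for validation. All data and thresholds are made up.
from statistics import mean, stdev

# blood cultures ordered per 1,000 patient-days, by hospital (illustrative)
culture_rates = {
    "Hospital A": 95.0,
    "Hospital B": 102.0,
    "Hospital C": 41.0,   # suspiciously low -- candidate for targeted review
    "Hospital D": 88.0,
    "Hospital E": 110.0,
}

mu = mean(culture_rates.values())
sigma = stdev(culture_rates.values())

# flag any hospital more than 1.5 standard deviations below the peer mean
flagged = [h for h, r in culture_rates.items() if (r - mu) / sigma < -1.5]
print(flagged)  # -> ['Hospital C']
```

A real screening approach would of course need risk adjustment and more robust statistics, but even a crude peer comparison like this would catch the "no cultures, no CLABSIs" strategy that a culture-triggered chart sample never sees.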

Gaming strategies that may be employed include overculturing (to designate an infection as present on admission, or POA), underculturing (if no blood cultures are collected . . . voila! No CLABSIs!), and adjudication/clinician veto ("I know that met the definition for SSI, but it was just a seroma . . . that I treated with antibiotics . . . uh, prophylactically . . . yeah, that's it!"). We have no idea how widespread these practices are, but the OIG report notes that the current validation strategy should be enhanced to better detect gaming. With the growing financial consequences attached to HAI prevention, it is paramount that everyone plays fair to keep the playing field level. Now, if we could also get more patient risk factors into the SIR models . . .
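For context, the standardized infection ratio (SIR) the post closes on is simply observed infections divided by the number predicted by a national risk model, so the risk factors in that model determine the denominator. A minimal illustration with made-up numbers:

```python
# The standardized infection ratio: SIR = observed HAIs / predicted HAIs,
# where "predicted" comes from a risk-adjustment model (e.g., NHSN's).
# The counts below are illustrative, not from any real facility.
observed = 12      # HAIs actually identified and reported by the facility
predicted = 16.0   # HAIs the risk model predicts for this patient mix

sir = observed / predicted
print(round(sir, 2))  # -> 0.75; below 1.0 means fewer HAIs than predicted
```

Note that shrinking the numerator through underreporting lowers the SIR just as effectively as real prevention does, which is exactly why validation matters; and a denominator built from too few patient risk factors penalizes facilities with sicker case mixes.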


  1. Great post, Tom. It's instructive to read the methods in the OIG report carefully--I'm no expert on validation approaches, but it's hard for me to see how the methods they currently use will identify hospitals that are gaming their HAI rates. Not only are they not using analytics to target outlier hospitals for validation, but the lab-based approach they use doesn't appear effective at identifying hospitals that change diagnostic approaches to reduce rates. For example, if charts are selected for review based on culture results, how would they detect underculturing? And reviewing only 12 charts in a large hospital, with allowance for up to 25% "unreliability"?

    We need much better validation if we're going to continue to link HAI metrics to financial rewards and penalties.

