De-implementation, or "stopping practices that lack supporting evidence," is a popular topic in infection control circles. In fact, just yesterday I read a discussion in which the authors suggested we no longer need to practice hand hygiene after removing gloves when caring for patients with CDI. I guess there aren't randomized trials - you can't be serious!
Which brings me to a recent review in the NEJM by Laura Mauri and Ralph D'Agostino titled "Challenges in the Design and Interpretation of Noninferiority Trials." This review is very well written - perhaps required reading for epidemiology students. In infection control, it is important to recognize that most de-implementation studies are really noninferiority trials. For example, when we discontinue contact precautions, we are really claiming that "stopping contact precautions" is noninferior to continuing contact precautions in preventing MDRO transmission - of course ignoring that compliance with contact precautions is probably so poor that they are basically the same intervention!
In the contact precautions example, we would be testing whether stopping contact precautions "is not worse than the control (continuing contact precautions) by an acceptably small amount, with a given degree of confidence." The null hypothesis would be that discontinuing contact precautions leads to higher MDRO transmission (i.e., is worse), and rejection of the null hypothesis is used to support the claim that discontinuing CP is noninferior. Here I suggest you stare at Figure 1 for a bit (probably easier to read in the paper with the description of each condition, but I have included it below anyway).
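To make that hypothesis test concrete, here is a minimal sketch of how such a noninferiority comparison might be analyzed, using a normal-approximation confidence interval on the risk difference. All numbers (the event counts and the 2-percentage-point margin) are made up for illustration and are not from any actual study.

```python
# Hypothetical sketch of a noninferiority comparison for the contact
# precautions example. All numbers are invented for illustration.
from math import sqrt

def noninferiority_test(events_new, n_new, events_ctrl, n_ctrl, margin, z=1.96):
    """Compare 'stopping contact precautions' (new) with continuing them
    (control) on an MDRO transmission rate. Noninferiority is claimed if
    the upper bound of a two-sided 95% CI on the risk difference
    (new - control) lies below the prespecified margin."""
    p_new, p_ctrl = events_new / n_new, events_ctrl / n_ctrl
    diff = p_new - p_ctrl
    se = sqrt(p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl)
    upper = diff + z * se
    # Noninferior only if even the worst plausible excess stays under the margin.
    return diff, upper, upper < margin

# Hypothetical data: 30/1000 transmissions after stopping vs 25/1000 while
# continuing, with a prespecified margin of 2 percentage points.
diff, upper, noninf = noninferiority_test(30, 1000, 25, 1000, margin=0.02)
```

The logic mirrors the review's framing: the point estimate can favor the control slightly, yet noninferiority is still claimed as long as the entire confidence interval for the excess risk sits below the margin.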
Further discussion about the design and analysis of these trials is way beyond the scope of a humble blog post; however, the authors include nice descriptions of methods for deriving noninferiority margins, the "constancy assumption" and statistical analysis approaches. But their 6th and 7th components of noninferiority trials are worth mentioning from an infection control standpoint:
6) Adequate ascertainment of outcomes: The authors write that "incomplete or inaccurate ascertainment of outcomes, as a result of loss to follow-up, treatment crossover or nonadherence, or outcomes that are difficult to measure or subjective, may cause the treatments being compared to falsely appear similar." I would suggest that studies seeking to de-implement contact precautions that do not include admission/discharge surveillance cultures to detect transmission events fail this criterion.
7) Issues with "Intention-to-Treat" in noninferiority designs: In superiority studies (typical RCTs), intention-to-treat analysis, in which everyone randomized is analyzed in their assigned group even if they received only one dose (or none), is the gold standard. The authors write: "In a noninferiority study, however, if some patients did not receive the full course of the assigned treatment, an intention-to-treat analysis may produce a bias toward a false positive conclusion of noninferiority by narrowing the difference between the treatments. In some instances, a per-protocol analysis, which excludes patients who did not meet the inclusion criteria or did not receive the randomized, per-protocol assignment, may be preferable in a noninferiority trial. However, a per-protocol analysis may include fewer participants and introduce postrandomization bias. In general, both the intention-to-treat and per-protocol data sets are important. We suggest analyzing both sets and examining the results for consistency."
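A toy calculation shows why nonadherence pushes an intention-to-treat analysis toward noninferiority. Assume, hypothetically, that the new strategy is truly worse and that some fraction of each arm effectively receives the other arm's care:

```python
# Hypothetical illustration: crossover/nonadherence dilutes a true difference.
# Suppose the control arm truly has a 5% event rate and the new strategy 10%
# (i.e., the new strategy is genuinely worse). If 30% of each arm effectively
# receives the other arm's care, the observed ITT rates are mixtures:

def itt_rate(own_rate, other_rate, crossover):
    """Observed event rate in an arm when a fraction crosses over."""
    return (1 - crossover) * own_rate + crossover * other_rate

true_ctrl, true_new = 0.05, 0.10
diffs = {}
for crossover in (0.0, 0.3):
    ctrl = itt_rate(true_ctrl, true_new, crossover)
    new = itt_rate(true_new, true_ctrl, crossover)
    diffs[crossover] = new - ctrl

# With no crossover the true 5-point difference is visible; with 30%
# crossover the observed ITT difference shrinks to 2 points, which could
# slip under a noninferiority margin even though the new strategy is worse.
```

This is exactly why the authors recommend reporting the per-protocol analysis alongside the intention-to-treat analysis and checking the two for consistency.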
Just some things to think about as we read the coming wave of de-implementation studies in infection control including diagnostic stewardship.
Tuesday, October 31, 2017
Monday, October 16, 2017
Wrong answer
This morning I stumbled upon this piece, Wrong Answer (free full text here), by Rachel Aviv in The New Yorker. It's an old article from 2014, but a wonderfully written, compelling, sad tale. It's the story of how high-stakes standardized testing of middle school students in economically disadvantaged neighborhoods in Atlanta led to cheating by teachers. It focuses on Damany Lewis, a superb teacher totally committed to his students, who tirelessly worked to improve his students' math knowledge and was successful in doing so, but not successful enough to hit an unreachable goal. Responding to increasing pressure to raise test scores, he and other teachers began to change the answers on students' tests. We all know that cheating is unethical, and at first glance I bet most of us would argue to punish those involved, but read this entire piece (warning: it's long), and you're likely to soften your stance. The consequences of not meeting unreasonable targets were so severe that the teachers felt compelled to cheat in the best interest of their students.
Now take this article from the education setting into the world of healthcare epidemiology, and if you're like me, there will be chills going down your spine as you read it. It should be required reading for anyone who works in healthcare quality or the key stakeholders in this space, from those at the front lines to those who work in professional societies, and to those who create policy at the state or national level. There are also lessons here for patients and patient advocates.
What happened in Atlanta shouldn't surprise us. In 1979, Donald Campbell, a psychologist, published a paper, the crux of which has become known as Campbell's law. I wasn't aware of this until I read Aviv's article. It states: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Moving this to our world, you can delete the word "social" in Campbell's law and take a look at Dan Sexton's commentary, Casablanca Redux, from 2012. Here's an excerpt:
[Photo: Donald Campbell]
Our informal discussions with other hospital epidemiologists, our experience in evaluating the source of infection in hundreds of bacteremic intensive care unit (ICU) patients, and common sense have led us to suspect that many hospitals do not accurately report their true rates of CLABSI, using current NHSN definitions. In some cases this may reflect an unwillingness of local staff to accept these definitions as accurate or fair; in other situations it may reflect an unconscious desire to hedge or reduce their rate of CLABSI to avoid criticism and negative consequences from their local supervisors in the press, clinicians, or the general public who review their publicly reported data... If clinicians inappropriately or illogically fear or anticipate negative feedback about the rate of CLABSI in their institutions, they may consciously or subconsciously fail to obtain blood culture results for every patient with a possible or likely BSI. Simply put: no culture equals no infection, using standard definitions of CLABSI.

At some level, we are all complicit in this. And depending on the action, it may not be the wrong thing to do. In fact, it may benefit the patient. For example, better diagnostic stewardship in the form of appropriately ordering fewer urine cultures not only lowers CAUTI rates but reduces antibiotic utilization with several resultant benefits. Still, it's important to note that the primary impetus for this was to lower HAI rates. We take into consideration how a new diagnostic test may impact HAI rates and may even allow that to impact the decision to implement (see an excellent paper by Dan on this here). We may allow clinicians to censor infections that infection preventionists have detected even though the cases meet NHSN definitions. And at the extreme, hospitals may engage in practices that may harm patients in order to reduce publicly reported HAI rates.
In a recent publication on how physicians in training view quality initiatives, a dirty secret was elicited from a resident during a focus group at an academic medical center: “There’s like the central line infection protocols…. If you suspect that anybody has any type of bacteremia, you don’t do a blood culture, you just do a urine culture and pull the lines … we just don’t even test for it because the quality improvement then like marks you off.”
While reading Rachel Aviv's paper, I wondered: Do we ever ignore results (i.e., infection rates) that seem too good to be true like the educational administrators in Atlanta did? Do we critically analyze surprisingly good results to the same degree as we do surprisingly bad results? In Damany Lewis' case did the end justify the means? Is there ever a situation where I could be pushed to a similar point as Lewis?
The Atlanta school system harmed students and teachers in a thoughtless quest to improve quality. There were no winners. Sadly, the response to the cheating scandal was to raise the stakes for test scores even higher. With pay-for-performance, the same thing is happening in health care.
Sunday, October 15, 2017
Chlorhexidine bathing outside the ICU: Await the ABATE!
Daily chlorhexidine (CHG) bathing has become routine in many ICUs. Given that more healthcare-associated infections (HAIs), including central line-associated bloodstream infections (CLABSIs), occur outside the ICU than within it, many hospitals have also implemented this practice on general medicine and surgical wards. However, to this point there are few data to support the effectiveness of CHG bathing outside of ICUs.
So I was very excited to hear Susan Huang present the results of the Active Bathing to Eliminate Infection (ABATE) study at IDWeek last week. Similar to the REDUCE MRSA study, this 53-hospital cluster-randomized trial took place in the Hospital Corporation of America (HCA) system. Units were randomized to routine care or decolonization (daily CHG [4% rinse-off shower or 2% leave-on bed bath], with the addition of mupirocin nasal ointment for 5 days if positive for MRSA by history or culture/screen) for a 21-month intervention period, after collecting baseline data for 12 months. The primary outcome was MRSA or VRE clinical isolates, and the main secondary outcome was any bloodstream isolate attributed to the unit (for common commensals, two or more positive cultures).
With the requisite reminder that it is always best to wait for the peer-reviewed publication to make firm conclusions, the results presented at IDWeek suggest the likely take-home from this study: No measurable benefit in the entire population, but significant reductions in MRSA/VRE clinical cultures and bloodstream infection in the subgroup with devices (central lines, midlines, and lumbar drains). This subgroup represented 12% of the study population but accounted for 34% of all MRSA/VRE events and 59% of bloodstream infections. The additional reductions in the decolonization arm for this subgroup (compared with routine care) were 32% for MRSA/VRE cultures and 28% for BSI (both highly statistically significant).
Once published, these findings will leave infection prevention programs with some interesting decisions. A quick take might be, “OK, let’s just use CHG in those non-ICU patients with devices (or just central lines).” However, this isn’t what the ABATE trial evaluated—it showed a substantial reduction in MRSA/VRE/BSI outcomes in patients with devices when everyone else was also receiving CHG (+/- mupirocin). To assume that the decolonization of the non-device population had no beneficial effect on those with devices is to discount any potential role for reduction in pathogen transmission between non-device and device patients. Another tricky question has to do with the role of mupirocin—adding this agent to CHG for known MRSA carriers without knowing how important this component of the intervention was adds some logistical complexity, cost, and antimicrobial resistance concerns (the investigators are also doing the microbiology work to assess for CHG and mupirocin resistance emergence).
We’ll revisit this study once it is published and all the details are available. For now we should congratulate Susan and the entire ABATE trial team for another tremendous contribution!
Monday, October 9, 2017
IDWeek 2017 is in the books
I'm back from IDWeek 2017 in San Diego -- always good to reconnect with old friends, meet new colleagues, and hear the latest in infection prevention and antibiotic stewardship science and practice. This year I noted an increase in pro-con and clinical controversies sessions, which I think are so valuable for those of us struggling day-to-day with issues where the evidence base is absent or opinion is conflicting. If you didn't get a chance to attend, fear not, as I am sure there will be more conference-related news emerging over the coming weeks (all 6 of your favorite bloggers made an appearance).
Of course, it's never too early to think about next year -- IDWeek 2018 heads back to the left coast, this time it's San Francisco, October 3-7, 2018 (mark your calendars!). I'm among the SHEA contingent for the planning committee (along with Hilary), and we're very interested in program suggestions -- please send them our way.
I'll leave you with one meeting tidbit -- Shelley Magill presented the results from the CDC's 2015 HAI Prevalence study, a follow up to their 2011 survey that was published in the NEJM. In the 4 years since the first survey (using the same hospitals included previously):
- The HAI prevalence rate among hospitalized acute care patients fell from 4.1% to 3.2% (a 22% decrease)
- Central line and urinary catheter use were both significantly lower
- Healthcare-associated UTIs and SSIs significantly decreased
Nice to see some positive news to reflect the extensive HAI prevention efforts nationwide. Of note, antimicrobial use stayed stable, reinforcing the need for increased antibiotic stewardship efforts. Given the enhanced focus in this area, I am hopeful that the next prevalence survey will show improvement on that end as well.