It's not about you. Really. Well, most of you

After my posts discussing construct validity and other issues with studies attempting to understand the effect of eliminating contact precautions, several folks expressed concern that I was specifically talking about their proposed study, published study, abstract, or hospital. Well, I can assure you that none of my recent posts are about your studies or hospitals. Well, except one, but not the one you might be thinking about.

While it's true that there have been many notable back-and-forth discussions about contact precautions on this blog since its inception, I greatly respect Dan’s and Mike’s right to implement infection control in their hospital, their way. For example, we might even agree that contact precautions reduce transmission of MRSA by a certain amount; however, they might decide that MRSA is rare enough in their hospital that contact precautions are not locally cost-effective. I might even agree. Other colleagues have been concerned that I’m commenting on their specific studies, including ones presented yesterday at ICPIC. Yet I never attended the session where several studies of ESBL control methods were presented. Unfortunately, jet lag prevented me from making the earliest session, so I can’t claim specific knowledge of or concerns about those studies.

However, I will share what prompted my recent posts (and hopefully a few more). A few weeks ago I had the opportunity to hear about a planned study that will examine the effect of discontinuing contact precautions. This planned study is large, includes a diverse set of hospitals, and has a randomized design. However, the study plans to discontinue contact precautions only in non-ICU settings and to track only clinical cultures. So, in my opinion, the study has strong generalizability and good internal validity, but very poor construct validity.

During the discussion of the planned study, I mentioned this issue. The PI on the study was very quick to say that, sure, it's not a perfect study, but isn’t some information better than nothing? Of course the PI knows that it is hard for other scientists to say no to that question. We all like data. However, in this case the answer is a resounding no. Any large study will have economic and other impacts. For example, one opportunity cost of funding a large study with poor construct validity is not funding another study that could have a stronger design. Additionally, if you aren’t measuring the outcome properly, how can you tell whether your intervention is harming patients? This is potentially a human subjects issue. So, hopefully they will add methods that can measure colonization on discharge or, better yet, track post-discharge infections.

There you have it. You can be assured that I wasn’t thinking about your study or hospital. Of course, I can’t promise the same in the future.

Comments

  1. Totally agree with this criticism. The main problem is that current practice in hospitals regarding adherence to most aspects of infection control, including contact precautions, is so poor that abandoning these measures will not result in a measurable effect in daily practice. This may lead to the false conclusion that these measures have no effect when applied in the intended way. The cluster-randomized study we presented at ICPIC on ESBL showed no overall effect of using single rooms. However, there was a high crossover rate, and adherence to several components of contact precautions was moderate. When we did a per-protocol or as-treated analysis, a clear effect was found. I am afraid that most studies concluding that contact precautions are useless should be considered "fake news". I hope the infection control community can do a better job in the future.

  2. OK, now I have to take another quick break from my vacation in an effort to keep this blog from becoming uncontroversial!

    In no particular order, here's what I've learned from this blog post and related discussion:

    1. We should pass up the opportunity to perform a large multicenter pragmatic trial of the impact of a major change in practice (how we apply contact precautions (CP)) due to limitations in the population to which the change is applied (non-ICU only) and the outcome measure proposed.

    I think this is a mistake. The proposed measure includes the “MRSA LabID event”, a metric deemed to be so important by CDC that it is publicly reported and linked to US hospital payments via two pay for performance programs (HAC and VBP). All studies have limitations—that’s what critical thinking, peer review, and discussion sections are for!

    2. It is foolish to use ongoing surveillance data in your own healthcare facility to observe the impact of a major change in infection prevention practice—either because the outcome is so rare that your observations will be severely underpowered, or because the practice you are changing is done so crappily (yes, I’m inventing this word) that it won’t have an impact one way or another.

    3. As a corollary to #2, it’s foolish to try to publish such single-center before-after experiences, because the results will be “fake news” and will mislead the ignorant masses into taking breathtakingly harmful steps (like changing the way they apply CP) that are bound to hurt people (even if the particular practice that is being changed (use of CP for MRSA/VRE carriers in endemic settings) has little published literature to support its use, and a couple controlled trials that are negative).

    Most practicing hospital epidemiologists I know are pragmatists (or quickly become pragmatists), having to juggle scores of competing patient safety concerns in their daily practice. They are much more interested in effectiveness (impact of a practice under “real world” conditions) than they are in efficacy (impact of a practice under ideal conditions). Don’t be surprised if they feel slightly insulted when you inform them that not only are their own observations after a practice change of no relevance, but also that it is not possible to design a study large enough to justify a change in practice because the impact on the outcomes they care about is too small to measure.

    Replies
    1. Jan - great point. The construct validity of the exposure is just as important as that of the outcome. Both are flawed in these poorly designed studies.

      Dan - you can do whatever you want. Just don't tell us that what you're seeing is association or causation. But if it makes you feel better, by all means, I'm not going to stop you. It's unfortunate that pragmatic hospital epidemiologists might be contributing to the spread of MDRO in patients who have passed through their hospitals. Maybe you're wishing the bacteria weren't invisible?

    2. Couldn't agree with Dan more. Practicing (i.e., non-theoretical) hospital epidemiologists must be pragmatic, as they are held to account for the infection rates in their hospitals. I have now been responsible for discontinuing contact precautions at two large academic medical centers, and I put a great deal of thought into this change knowing that my job was potentially on the line if it failed. It's a decision that must balance competing priorities, like most difficult decisions. Just one question might be: do we harm patients by blocking beds in double-bed patient rooms when a patient in one of the beds has MRSA or VRE, while other patients who need to be admitted stay in the Emergency Department or are placed on a unit where the optimal resources are lacking?

      We just posted our lowest CLABSI rate (and raw number of CLABSIs) EVER, 1.5 years after stopping contact precautions for MRSA and VRE. And this occurred during a period when we have had our highest inpatient census ever. The hospital is crazy busy. Over the post-CP period, we've watched a steady decline in CLABSI. While it may be true that the impact of contact precautions can extend beyond hospitalization (if in fact contact precautions are effective), don't you think that if an intervention were as powerful as you think contact precautions are, we would see a rising CLABSI rate, or at least a stable rate, not a consistently falling rate to the lowest we've ever recorded? We certainly have a subpopulation of very ill patients with very long lengths of stay. Moreover, one would postulate that our LabID MRSA bacteremia SIR should also be impacted if we eliminated an effective intervention that was reducing transmission (as Dan mentions above), but it's 52% lower than expected. I honestly just don't understand the fascination with a practice (CP) that is patient, provider, and health system unfriendly in so many ways, and for which compelling data on effectiveness are lacking. Increasing numbers of hospitals are moving on.

    3. That's very touching (that you agree with Dan). But the era of claiming n=1 or poorly designed studies as evidence should be over. The population-level impacts are too large. I'm not even sure I understand your CLABSI point, but congrats. Anecdote, schmanecdote?

      And to repeat, because you don't seem to be getting my points: everyone can implement things in their own hospital, their own way. But you can't claim there is efficacy or effectiveness in your approach toward the prevention of MDRO if you do a crap study.

      You have been a vocal critic of CP forever; it's just time that others speak up for the other side. I'm gonna be the Elizabeth Warren in this area.

