Sunday, July 24, 2016

CAUTI: A diagnosis in search of a disease

Last month, we had a series of posts on CAUTI that generated a great deal of interest. Yesterday, a reader of the blog alerted me to a recently published paper that I hadn't seen on the implementation of a CAUTI prevention bundle at the Mayo Clinic. Like the recent NEJM paper led by U. of Michigan investigators on CAUTI reduction, this was a quasi-experimental before-and-after study; however, it was performed at a single center. The reduction in CAUTI was substantial, at 30%. Importantly, the Mayo bundle was implemented after urinary catheter utilization had already been successfully reduced, and it contained a critical component not found in the Michigan bundle: culturing urine only when the indication was clear. In addition to education, the EMR interface forced providers ordering a urine culture on a patient with a urinary catheter to specify the indication for the culture. The pick list options were: fever in a kidney transplant recipient; fever in a pregnant patient; neutropenic fever; fever after urologic procedure/surgery; fever and known urinary tract obstruction; unexplained suprapubic or flank pain; spinal cord injury patient with new or worsening spasticity, autonomic hyperreflexia, malaise, lethargy, or sense of unease; admission of a chronically catheterized patient with new fever or unexplained mental status changes; septic shock; or other (free text). This change in the EMR resulted in a 50% reduction in urine cultures obtained, suggesting that we significantly overdiagnose CAUTI.
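For readers who like to see the logic spelled out, here is a minimal sketch (in Python) of the kind of hard-stop order-entry rule the Mayo group describes: a urine culture on a catheterized patient cannot be signed until an approved indication is documented. The indication strings mirror the pick list above; the function name and data structures are hypothetical, not taken from the paper or any EMR vendor.

```python
# Minimal sketch of a hard-stop urine culture order rule. The indication list
# mirrors the Mayo pick list; everything else is hypothetical.

APPROVED_INDICATIONS = {
    "fever in kidney transplant recipient",
    "fever in pregnant patient",
    "neutropenic fever",
    "fever after urologic procedure/surgery",
    "fever and known urinary tract obstruction",
    "unexplained suprapubic or flank pain",
    "spinal cord injury with new/worsening spasticity, autonomic hyperreflexia, malaise, lethargy, or sense of unease",
    "admission of chronically catheterized patient with new fever or unexplained mental status changes",
    "septic shock",
    "other",  # requires free-text justification
}

def validate_urine_culture_order(has_urinary_catheter, indication="", free_text=""):
    """Return (allowed, message) for a urine culture order."""
    if not has_urinary_catheter:
        return True, "No catheter: order proceeds without a forced indication."
    if indication not in APPROVED_INDICATIONS:
        return False, "Select an approved indication before ordering."
    if indication == "other" and not free_text.strip():
        return False, "Free-text justification required for 'other'."
    return True, "Order accepted; indication documented: " + indication

# Example: an order with no documented indication is blocked.
print(validate_urine_culture_order(True))
```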

CAUTI increasingly appears to be a diagnosis in search of a disease. Will our delusional obsession with it ever end?

Sunday, July 17, 2016

A shock to the system

Back in April of 2011, in a blog post about the VA MRSA Initiative, I wrote:
“It seems that HAIs are falling across the VA system, probably as a result of an important culture change, and improved application of so-called "horizontal" approaches to infection prevention. Bravo to the VA!”
Of course, in addition to praising the VA, I called into question the one “vertical” component of the MRSA initiative (MRSA screening/isolation) based upon the dramatic reductions in VRE and C. difficile that accompanied the intervention, and the evidence that very little of the reported MRSA reduction could have been due to reduced MRSA transmission.

Thanks to Michi Goto and colleagues, we now have even more evidence that the MRSA Initiative coincided with a fundamental shift in infection prevention in the VA healthcare system. Their recently published CID paper adds hospital-onset Gram-negative bacteremia to the list of HAIs that saw significant reductions after the MRSA Initiative was implemented (see graph above). I will outsource most of the rest of this blog post to Maryn McKenna and Marc Bonten; please read their excellent analyses.

Marc ends his blog post by observing that “we still don’t know how” the VA achieved these HAI reductions—in particular, we don’t know the relative importance of the most expensive, resource-intensive aspect of the program (MRSA screening). As someone who served as a VA hospital epidemiologist during the implementation of the initiative, I’ll give my completely unsupported, data-free opinion: the MRSA screening was both unnecessary and essential to the VA’s success. 

Unnecessary because the HAI reductions, as Marc's group demonstrated, were not achieved via reduced bacterial transmission, which is the mechanism that MRSA screening/isolation targets.

Essential because it served as a "shock to the system" within the VA. Implementing any universal screening and isolation program requires multidisciplinary buy-in and major investments in laboratory and infection prevention resources (in the VA system, funding for molecular detection and for a new "MRSA coordinator" position, essentially an extra infection preventionist). It's this kind of system-wide investment that can change the culture, moving basic infection control practice higher on everyone's priority list.

We should be very surprised if any complicated, resource-intensive, universal screening intervention (including, now, universal C. difficile screening) doesn't move the needle in a quasi-experimental (before-after) study design.  We also shouldn't be surprised when that same intervention fails to impress in cluster-randomized trials, as we've seen with MRSA screening in STAR*ICU, REDUCE MRSA, MOSAR, etc.
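To make that point concrete, here is a toy simulation, not based on any of the cited trials, showing how a purely secular decline in infection rates can make a null intervention look effective in a before-after comparison, while a concurrent comparison over the same period (the logic of a cluster-randomized trial) correctly shows nothing.

```python
# Toy simulation: a hospital-wide secular decline makes a null intervention
# look effective in a before-after comparison but not in a concurrent one.
import random
from statistics import mean

random.seed(1)

def monthly_rate(month):
    # Infections per 1,000 patient-days, drifting down ~3% per month
    # regardless of any intervention (a purely secular trend plus noise).
    return 2.0 * (0.97 ** month) * random.uniform(0.9, 1.1)

before = [monthly_rate(m) for m in range(0, 12)]    # year before the "intervention"
after = [monthly_rate(m) for m in range(12, 24)]    # year after the "intervention"
print(f"Apparent before-after reduction: {1 - mean(after) / mean(before):.0%}")

# Concurrent comparison: intervention and control units observed over the same
# calendar year share the secular trend, so the true null effect shows through.
arm_a = [monthly_rate(m) for m in range(12, 24)]
arm_b = [monthly_rate(m) for m in range(12, 24)]
print(f"Apparent concurrent reduction:   {1 - mean(arm_a) / mean(arm_b):.0%}")
```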

Monday, July 11, 2016

I think we found the lowest paid doctor....

Today’s NY Times is highlighting this JAMA Internal Medicine article about the salary disparity between male and female physicians at 24 public medical schools. A major strength of this study is that the findings are not based upon self-report or survey—public medical school salaries are subject to freedom of information laws, so the data sources were government financial records. 

The main finding: after adjustment for multiple variables that impact salary, female physicians are still paid almost $20,000 less than their male counterparts. The graphic below is from the NYT article, using data from Table 3 of the JAMA IM paper:


As you can see, ID is very near the bottom in salary—for adjusted salary, among women only neurologists are paid less, and among men only family physicians. But wait, family medicine and neurology each require 4 years or fewer of graduate medical training, whereas ID requires 5-6 years. So by my rough calculation, adjusting for ‘years of additional training required to practice’, the lowest paid doctor is....drum roll please:

The female ID physician!

This point about years of training is very important as it relates to recruitment into ID. Check out the difference in salary between ID and Internal Medicine above. For how many subspecialties do additional years of training reduce earning potential? We’ve covered this ground before, but of course this helps explain the two graphs below (reference here). 
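For those who want to play with the arithmetic, here is a back-of-the-envelope sketch of the "adjust for extra training years" point above. The salary figures are placeholders, not the Table 3 values; plug in the paper's adjusted salaries to reproduce the comparison.

```python
def premium_per_extra_training_year(specialty_salary, general_im_salary, extra_fellowship_years):
    """Annual salary premium over general internal medicine, amortized over the
    additional fellowship years required to practice the subspecialty."""
    return (specialty_salary - general_im_salary) / extra_fellowship_years

# Hypothetical inputs for illustration only (NOT the Table 3 values):
print(premium_per_extra_training_year(
    specialty_salary=215_000,    # placeholder adjusted ID salary
    general_im_salary=220_000,   # placeholder adjusted general IM salary
    extra_fellowship_years=2,    # additional years of ID fellowship
))
# A negative result means each extra training year reduces earning power,
# which is the recruitment problem described above.
```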





Sunday, July 10, 2016

We need better HAI metrics

Over the past few months, my Chief Medical Officer and I have been meeting with Department Chairs to discuss incentive-based quality and safety metrics. In these meetings we agree upon metrics tied to financial incentives for achieving specified targets. The goal is a win-win: we improve patient care and the department benefits. We typically include at least one healthcare-associated infection metric.

With one of our surgical chairs, we were discussing surgical site infection rates. We all agreed that these are excellent metrics on which to focus. The more contentious issue was the target rate to be achieved. We suggested a 10% reduction from the current rate. His counter-argument was that his department's SSI rates were already very low and further reduction might not be achievable. He suggested that national benchmarks be used. That's a great idea, except that we no longer have national benchmarks. NHSN used to provide mean pooled infection rates with percentile scoring, using data that were updated at regular intervals. This is no longer the case. Now NHSN provides SIRs (standardized infection ratios), calculated by comparing the observed number of infections to the expected number. SIRs could be very helpful in this case, but in reality they aren't, since the expected numbers of infections are derived from data collected from 2006 to 2008. So using the SIR, all I could tell the surgical chair was how his current SSI rates compare to those of other programs nationally a decade ago. If the expected number of infections is derived from data that are a decade old, it defeats the entire purpose of the SIR. For this purpose, the SIRs that NHSN produces are worthless.

Hospitals have two needs with regard to quality metrics: internal trending (are we getting better or worse over time?) and benchmarking (how do we compare to other hospitals?). The SIRs currently being produced can be used for internal trending, but not for benchmarking. This leaves hospitals completely in the dark vis-à-vis their comparative performance. CDC is in the process of establishing new baselines for expected infection rates. This will be helpful for a year or so, but then the expected data will age and benchmarking will again be flawed.

There seems to me to be an easy fix: establish two SIRs. A static SIR would use a fixed data set to derive the expected number of infections, allowing hospitals to trend their performance internally over time. A dynamic SIR would use data from the previous year, updated annually, to allow comparison of current performance across hospitals. This could be easily accomplished.
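To spell out the proposal, here is a minimal sketch. The SIR itself is just observed infections divided by expected infections; the two versions differ only in which baseline supplies the expected count. Function names and the example counts are mine, not NHSN's.

```python
def sir(observed_infections, expected_infections):
    """Standardized infection ratio: observed / expected."""
    return observed_infections / expected_infections

def static_sir(observed, expected_from_fixed_baseline):
    # Expected count derived from a fixed historical baseline (e.g., 2006-2008
    # data): useful for trending a hospital's own performance over time.
    return sir(observed, expected_from_fixed_baseline)

def dynamic_sir(observed, expected_from_prior_year):
    # Expected count re-derived each year from the previous year's national
    # data: useful for benchmarking current comparative performance.
    return sir(observed, expected_from_prior_year)

# Illustration with made-up counts: the same hospital can look fine against a
# decade-old baseline while lagging its current peers.
print(static_sir(observed=18, expected_from_fixed_baseline=24))   # 0.75
print(dynamic_sir(observed=18, expected_from_prior_year=15))      # 1.20
```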

While we have seen some improvement in NHSN metrics, the overall trend, in my opinion, is that NHSN is moving toward metrics of lesser value (e.g., lab-based automated metrics), and I get the sense that they're not particularly interested in the viewpoint of hospitals. In the era of value-based reimbursement, hospitals need valid comparative performance data more than ever, yet CDC appears out of touch and is moving in a completely different direction.

Wednesday, July 6, 2016

A Living Wage and Paid Sick Leave are Infection Control Issues


We've written often about presenteeism (workers staying on the job while they are sick) and have mentioned paid sick leave as a mechanism for keeping sick healthcare workers at home. A few years ago, New York City mandated a minimum of 5 days of paid sick leave for all companies with over 20 workers, yet few other cities or states have similar regulations. While most presenteeism discussions focus on workers in specific workplaces, little attention has been paid to home healthcare providers. These individuals face significant time and financial pressure to work while sick, as evidenced by a recent Guardian article examining issues facing working mothers in Denver.

In the first of an election-year series of discussion groups, The Guardian asked five working mothers, including three home healthcare workers, about the many barriers they face in caring and providing for their families. Several quotes from the article are particularly relevant to discussions of presenteeism and infection prevention:

"As the women discussed, they’re paid less than male colleagues, they often struggle to find reliable childcare, they lack medical leave and they rarely even get paid time off"

"I mean, I cannot miss a day of work, because I have to pay rent."

"they [home healthcare companies] don’t give you benefits, they don’t give gas [money], they don’t pay for my mileage, and you take care of all these sick people in their homes. And then when you get home with this low paycheck, again, you’re struggling. The money’s not enough for us to take care of our family. No vacation, no sick pay, no benefits."

"All the women who were home healthcare aides lacked paid medical leave, which can leave both them and their clients vulnerable." 

"When I go into people’s homes, I need to be well. I take care of people with Aids, I take care of people with cancer. They don’t need to get pneumonia from me. So it’s not just about me as a home care worker, it’s about raising the standard of home care not only for myself, but for the people I take care of."

"I was sick last year with Type A flu virus. I couldn’t take one day, and they told me to wear a mask on my face to go to my client. She’s 84."

Much attention has been paid to social determinants of health and to how social and economic factors are associated with poor individual and community health. What is clear from this discussion group is that social determinants of health affect infection control through presenteeism, and that no one, not even the wealthy, is protected from infections (e.g., influenza) transmitted in the community by home healthcare workers who have to work while sick to provide for their families. A living wage and paid sick leave for all workers are critically needed for infection prevention, and for many other reasons, as outlined in this excellent, well-timed discussion group.

Saturday, July 2, 2016

Lab capacity and emerging infections

Question: What do mcr-1 and Candida auris have in common?

Answer: Both are emerging infection threats, and neither can be identified by most hospital laboratories. 

The mcr-1 gene, which confers resistance to colistin and can be plasmid-mediated (and thus easily spread), was first described in November 2015 but is now known to be present in over 20 countries (it has also been found in archived organisms going back over 30 years). Of course, it didn't just suddenly appear everywhere at once—it has been spreading undetected for a long time, and will continue to do so. Many laboratories don't test for colistin susceptibility, and such testing is highly problematic—the properties of the compound interfere with diffusion through agar and cause adherence to the surfaces of broth dilution trays (resulting in variable performance by test method). There isn't even agreement on the breakpoint for resistance. Thus the labs that do perform testing only do so on multidrug-resistant organisms or upon request—and if resistance is detected, the organism must be sent to a reference lab for molecular detection of the gene (CDC recommends sending Enterobacteriaceae with a colistin MIC of 4 mcg/mL or higher, unless they are Proteus, Providencia, Morganella, or Serratia, which are intrinsically resistant to colistin).
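As a lab triage rule, the CDC referral criterion described above might look something like the sketch below. The genus list and MIC cutoff come from the criterion itself; the function is hypothetical and not an actual CDC or instrument-vendor interface.

```python
# Sketch of a triage rule for sending isolates for mcr-1 molecular testing.
INTRINSIC_COLISTIN_RESISTANCE = {"Proteus", "Providencia", "Morganella", "Serratia"}

def refer_for_mcr1_testing(genus, is_enterobacteriaceae, colistin_mic_mcg_per_ml):
    """Flag an isolate for reference-lab molecular testing for mcr-1."""
    if not is_enterobacteriaceae or colistin_mic_mcg_per_ml is None:
        return False  # criterion applies to Enterobacteriaceae with a measured MIC
    if genus in INTRINSIC_COLISTIN_RESISTANCE:
        return False  # intrinsic colistin resistance, not suggestive of mcr-1
    return colistin_mic_mcg_per_ml >= 4.0

print(refer_for_mcr1_testing("Escherichia", True, 4.0))  # True  -> send to reference lab
print(refer_for_mcr1_testing("Proteus", True, 8.0))      # False -> intrinsically resistant
```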

Candida auris is another interesting story. First described in 2009 from the external ear canal of a patient in Japan, C. auris has now been detected in 9 countries, can cause invasive candidiasis in hospitalized patients, has outbreak potential, and can be resistant to all three major antifungal classes. But again, conventional methods for fungal identification do not identify this newer species (biochemical panels most often identify it as C. haemulonii), and many labs never go beyond the “big 5” species (albicans, glabrata, parapsilosis, tropicalis, and krusei), calling the rest “other Candida spp.” MALDI-TOF may soon be useful, but for now it is best to look more closely at invasive yeast isolates that are identified as related species (e.g., C. haemulonii) or that are unusually resistant (provided your lab routinely performs antifungal susceptibility testing on invasive isolates).
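A similar, purely illustrative triage sketch for possible C. auris, based on the clues above: a C. haemulonii call from a biochemical panel, or unusual antifungal resistance in an invasive isolate. The resistance threshold is my assumption, not a validated criterion.

```python
# Species calls that biochemical panels may report instead of C. auris; the
# post names C. haemulonii, so that is the only entry here (extend as needed).
SUSPICIOUS_SPECIES_CALLS = {"Candida haemulonii"}

def flag_possible_c_auris(panel_species_call, invasive_isolate, resistant_antifungal_classes):
    """Flag an invasive yeast isolate for MALDI-TOF or reference-lab identification."""
    if not invasive_isolate:
        return False
    if panel_species_call in SUSPICIOUS_SPECIES_CALLS:
        return True
    # Resistance to two or more antifungal classes is unusual enough to warrant
    # a closer look (this threshold is an illustrative assumption, not a guideline).
    return resistant_antifungal_classes >= 2

print(flag_possible_c_auris("Candida haemulonii", True, 0))  # True
```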

These two emerging threats reinforce the importance of investment in lab capacity at all levels, from the individual hospital to state, regional and national public health laboratories.

Now to celebrate the 4th of July weekend, a submission from the 2015 ASM Agar Art Challenge:

Tuesday, June 28, 2016

Excess Mortality in CRKp: A Non-Randomized (Fortunately) Trial

Understanding the burden of antimicrobial resistance is critically important if we are to appropriately target research and clinical resources. For years, the lack of proper estimates of the morbidity, mortality, and costs associated with multidrug-resistant bacteria greatly limited the attention paid to these pathogens. This changed with the 2013 CDC Antibiotic Resistance Threats Report, which provided the public with the number 23,000. In the report, carbapenem-resistant Klebsiella pneumoniae (CRKp) was estimated to cause 7,900 infections and 520 deaths per year. But questions remain: Is the 6.5% (520/7,900) mortality estimate too high or too low? And how can we estimate the burden of resistance when we can't (fortunately) perform randomized trials in which we randomly infect patients?

To answer these important questions, a group of investigators formed The Consortium on Resistance against Carbapenems in K. pneumoniae (CRACKLE) and just published a cohort study in Clinical Microbiology and Infection. This group, which appears to include 18 Great Lakes hospitals, prospectively collected CRKp BSI (N=90), pneumonia (N=49), and UTI (N=121) isolates along with a control group (N=223) of patients with CRKp urinary tract colonization. The use of patients colonized but not infected with the pathogen as controls is interesting. The authors explain that they chose these controls since "non-infection-associated contribution to overall mortality is relatively larger in patients colonized with CRKp compared with patients colonized with more susceptible organisms, since risk factors for mortality such as chronic and acute illness, overlap with risk factors for CRKp colonization. An estimate of this non-infection-related mortality may be approximated in patients who are colonized, but not infected with CRKp." This is another way of saying that they wanted to isolate the attributable mortality risk of the infection, not of the underlying disease.

The primary outcome was time to in-hospital mortality from the first positive CRKp culture, expressed as adjusted hazard ratios from Cox proportional hazards models. I've included the unadjusted outcomes below. The full paper includes separate models and Kaplan-Meier curves for each type of infection, which don't differ greatly from the unadjusted outcomes.


As you can see, 39% of both BSI and pneumonia patients died or were transferred to hospice care, compared with 12% of controls, giving an attributable mortality of 27% for CRKp infection. In the Cox models, the adjusted hazard ratio was 2.59 (1.52-4.50) for BSI and 3.44 (1.80-6.48) for pneumonia. In contrast, CRKp UTI appeared protective in both the unadjusted (3% lower mortality) and adjusted (aHR=0.68, p=0.33) analyses. This is further evidence that we need to rethink our definitions of, and focus on, UTI.
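For anyone checking the arithmetic, the unadjusted attributable mortality quoted above is simply a risk difference. A quick sketch, using the percentages reported here rather than the paper's raw counts:

```python
def attributable_risk(risk_infected, risk_controls):
    """Crude attributable mortality: the risk difference between groups."""
    return risk_infected - risk_controls

risk_bsi_or_pneumonia = 0.39    # death or hospice transfer, BSI and pneumonia groups
risk_colonized_controls = 0.12  # death or hospice transfer, colonized-only controls

print(f"Unadjusted attributable mortality: "
      f"{attributable_risk(risk_bsi_or_pneumonia, risk_colonized_controls):.0%}")
# -> 27%, the figure cited above; the adjusted hazard ratios (2.59 for BSI,
#    3.44 for pneumonia) come from the Cox models and aren't reproduced here.
```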

Overall, a very nice study that utilized a novel control group of patients colonized but not infected with the organism of interest. It is likely that this approach, when coupled with multivariable analysis, reduced the effects of measured and unmeasured confounders. And it looks like CDC should increase the attributable mortality estimate from the 6.5% in their 2013 report to something a bit higher, say 27%.