Monday, January 28, 2013

The Good, the Bad, and the Ugly

The good Dr. Diekema stopped by Dartmouth-Hitchcock last week to deliver medical grand rounds. He provided a wonderful update on infection prevention. You can click on the screen shot to the right or the link below to view a video of the presentation, which includes his full slide deck. Enjoy!

Source: Dartmouth-Hitchcock Grand Rounds 1/25/2013

Sunday, January 27, 2013

HOP, SCIP: JUNK!

There is an interesting paper in the most recent Clinical Infectious Diseases on the Surgical Care Improvement Project (SCIP) and the Hospital Outpatient Measures Project (HOP). These are national QI projects intended to improve surgical care. The article lays out numerous problems with these projects, yet despite that, results from these projects will now begin to impact hospital reimbursement.

The authors note: "Measures are rolled out before their full impact is assessed, using live hospitals as the testing ground and relying on individuals trying to comply with these measures to troubleshoot. When issues do arise that require the measures to be changed, response times are invariably at least 6 months; meanwhile patients may be at risk, and measures are consistently failed."

It would be very interesting to know what these projects have cost hospitals. At my hospital, we have 1.5 nurse FTEs dedicated just to abstracting the data. Beyond that are thousands of hours of physician and nurse time spent trying to improve compliance with the metrics. And yet, there are no compelling published data showing that outcomes have improved.

Dale Bratzler, the architect of the projects, writes a response in the same issue of CID. At best, his response is tepid and sheds little light on why these projects should be continued. His commentary ends like this: "The specific issues with SCIP performance metrics highlighted by Weston and colleagues are clearly a source of frustration for providers. However, the authors do not provide any evidence that harm has occurred because of implementation of SCIP." OK, so we wasted millions of dollars, frustrated clinicians, and are about to punish hospitals financially, but we don't think any patients were harmed. Now that's exactly why much of QI is viewed as a joke!

Wednesday, January 23, 2013

Things is gettin' worser - Pediatric Undervaccination

The typical trajectory of public health over the past several centuries has pointed upwards towards improvements, including longer life expectancy. However, as we've entered the post-scientific era, in which our country receives its medical advice from celebrities, it's not at all surprising that things can move in the wrong direction. Of course, vaccines are a victim of their own success, since the absence of terrible diseases like measles convinces people that they don't need to vaccinate their children and also makes them more susceptible to false claims about vaccine side effects, but I digress.

Jason Glanz and colleagues recently published a matched-cohort study of healthcare utilization in vaccinated vs. undervaccinated children less than 3 years old. Their primary findings were that undervaccinated children had lower rates of outpatient visits but higher rates of inpatient admissions. There are also some interesting findings about how parental choice in vaccination is associated with healthcare utilization. What was more interesting to me is how things have gotten worse across the birth cohorts over the 5 years studied (2004 to 2008). In the figure below, time to first vaccination has increased greatly over the 5 years (solid shapes), as have the rates of 'no vaccination'. So things are moving upward, but in this case, that's the wrong direction.



Reference: Glanz JM, Newcomer SR, Narwaney KJ, et al. A Population-Based Cohort Study of Undervaccination in 8 Managed Care Organizations Across the United States. JAMA Pediatr. 2013;():1-8. doi:10.1001/jamapediatrics.2013.502.

Tuesday, January 22, 2013

Influenza Vaccine Effectiveness Study Author Responds

Late last week, Dr. Jackson, one of the co-authors of the recent MMWR influenza vaccine effectiveness report, sent me an email response to my posts discussing how and why they measure influenza vaccine effectiveness the way they do. I thought that in the interest of fairness, I should post his full email, rather than pasting his response into a largely hidden comment section. I will likely post a follow-up to this at some point. Additionally, I want to thank Dr. Jackson publicly for responding in this fashion, both professionally and academically. The intent of my posts was academic, and I am pleased that he responded the way he did.

Dear Dr. Perencevich,

I am one of the co-authors on the MMWR article on influenza vaccine effectiveness, and I read your blog posts about that article. I believe I can clear up some of your questions. I tried posting this as a comment on the blog, but the website wouldn't let me, so my apologies for e-mailing you instead. Feel free to post this as comment on the website if you are able.

(1) Regarding your first point, this study used what is known as a “test-negative” design. In the test-negative design, we enroll patients with a medically attended acute respiratory illness (MAARI). We then test these enrollees for influenza, and assess who was previously vaccinated and who was not. The test-negative design is based on an assumption that the rate of MAARI caused by pathogens other than influenza is the same in both vaccinated and unvaccinated persons. If this assumption is true, then our test-negative subjects are representative of the population from which the influenza-positive cases came, and our study does give an estimate of how well the vaccine reduces the risk of getting sick enough to visit the doctor.

 Although the paper refers to cases and controls, this is not a true case-control study, since a true case-control study requires that we know who is a case and who is not before we sample them. The design is closer to the “indirect cohort” method proposed by Claire Broome for studying pneumococcal vaccine effectiveness [NEJM 1980; 303:549-52]. The real advantage of the test-negative design is that it controls for differences in healthcare-seeking behavior. If we did the full cohort study you proposed, there would be variation among the cohort members in how often they seek healthcare, and these variations would be related both to their likelihood of being vaccinated and to their likelihood of going to the doctor if they got influenza. By only sampling people who come to the doctor, we control for those differences.

(2) Regarding your second and third points, the relative risk (RR) is not an appropriate measure of association for this study design. The RR is (obviously) a measure of risk, which would be based on the cumulative incidence of disease in some defined cohort. In this study, we do not sample the full cohort; we simply use the influenza-negative subjects to estimate the frequency of vaccination in the cohort. Using the RR in this setting would give a biased estimate of vaccine effectiveness, which is seen in the sample calculations you provided.

Sampling in a test-negative design is conceptually similar to incidence density sampling in a case-control study. When using incidence density sampling, the exposure odds ratio is a direct estimate of the incidence rate ratio and not an approximation to the RR. 

(3) Finally, regarding your final question about including the flu B cases in the estimate of VE against flu A: As mentioned above, the test-negative design assumes that the rate of non-flu MAARI is the same in vaccinated and unvaccinated persons. If we included flu B in the non-case group, we would be violating this assumption, because the vaccine does protect against B, and the rate of non-flu A MAARI would no longer be the same in vaccinated and unvaccinated persons.

 I hope this clears up your questions!
~Mike Jackson, Group Health Research Institute
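To make the test-negative logic above concrete, here is a minimal sketch in Python using made-up numbers (the coverage, attack rates, and "true" VE are all hypothetical, not taken from the MMWR report). It works with expected counts and shows why, under the assumption that non-influenza MAARI occurs at the same rate in vaccinated and unvaccinated people, the exposure odds ratio among clinic attendees recovers the vaccine effectiveness:

```python
# Minimal sketch of the test-negative design using expected counts.
# All inputs below are hypothetical -- none come from the MMWR report.

population = 100_000
p_vaccinated = 0.40           # hypothetical vaccine coverage
true_ve = 0.62                # hypothetical "true" vaccine effectiveness
flu_maari_rate_unvax = 0.05   # risk of medically attended flu illness if unvaccinated
nonflu_maari_rate = 0.10      # risk of medically attended non-flu illness (same in both groups)

n_vax = population * p_vaccinated
n_unvax = population - n_vax

# Expected clinic attendees who test positive (cases) and negative (test-negative controls)
cases_vax = n_vax * flu_maari_rate_unvax * (1 - true_ve)
cases_unvax = n_unvax * flu_maari_rate_unvax
controls_vax = n_vax * nonflu_maari_rate
controls_unvax = n_unvax * nonflu_maari_rate

# Exposure odds ratio among clinic attendees, and the VE estimate it implies
odds_ratio = (cases_vax / cases_unvax) / (controls_vax / controls_unvax)
print(f"VE estimated from the test-negative OR: {1 - odds_ratio:.2f}")  # prints 0.62
```

With expected counts the odds ratio works out to exactly 1 minus the true VE; real data add sampling noise, but the underlying logic is the same.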

Monday, January 21, 2013

Semmelweis was right!

We do a lot of blogging about hand hygiene, and Eli frequently points out the dearth of well-designed studies examining improvement approaches and/or the relationship between HH improvement and reduced infection rates.

In that context, I’d like to recommend this quasi-experimental study from Kathy Kirkland and her colleagues at Dartmouth, published last month in the BMJ Quality & Safety journal. Eli can inform us as to whether it would have “made the cut” for the 2011 Cochrane Review, but I really like this report. The interventions and setting are clearly described, more than one measure of HH is used (direct observation + product use), infection rates are reported over the entire time period, and the discussion section is thoughtful.

There are limitations to the work, but I encourage those who haven’t yet seen it to read it. Mike can add his experience here, as I think his group does more monthly observations, and has had a similar HH journey. I’ve also long been skeptical that rates of 90% can be achieved or sustained in settings where observations are truly clandestine (as I think in most hospitals the HH observers quickly become quite familiar to unit personnel). I’d love to be disabused of this skepticism, if it is indeed misplaced.


Figure from Kirkland KB, et al. BMJ Qual Saf 2012;21:1019-26.


Wednesday, January 16, 2013

Proof.

Finally, the purists out there who require demonstration of efficacy by a randomized clinical trial before attempting a novel therapy can now breathe a great sigh of relief. The New England Journal of Medicine has just published an RCT that demonstrates the clinical utility of fecal transplantation for C. difficile infection. In fact, fecal transplants worked so well that the trial was terminated early after an interim analysis.

Patients in the study all had C. difficile infection with at least one relapse. They were randomized to one of three study arms: (1) a 4-day course of oral vancomycin followed by bowel lavage and then fecal transplant via nasoduodenal tube; (2) a 14-day course of oral vancomycin; or (3) oral vancomycin plus bowel lavage. In the transplant group, 13 of 16 patients were cured after 1 fecal infusion (2 of the remaining 3 were cured after a second infusion). In contrast, only 4 of 13 in the vanco group and 3 of 13 in the vanco-plus-lavage group were cured. Bottom line: fecal transplantation had an overall cure rate of 94% (15 of 16 patients).

There remain two barriers for patients to access this highly effective therapy: (1) very few physicians perform the procedure, in part, I think, because there is no reimbursement despite the several person-hours required to prepare the fecal solution and administer it; and (2) insurance companies will not reimburse for donor testing, which costs approximately $1500.

So we've proven what we already knew. Now it's time to look at more interesting questions: does fecal transplantation work for irritable bowel syndrome and inflammatory bowel disease?

Graphic:  Andrea Levy, Cleveland Plain Dealer.

Tuesday, January 15, 2013

Part 2: Influenza Vaccine Effectiveness - The Test-Negative Control Design

Before going further, I want to highlight the purpose of yesterday's and today's posts. Occasionally, it is important to question standard practice in epidemiology when something doesn't make sense to you. I've experienced this firsthand 3-4 times in infectious disease epidemiology and consider it an important exercise and critically important to our field. You can read this post if you want to learn more about how we've questioned ID epidemiology dogma around control-group selection, quasi-experimental design methodology, and inappropriately controlling for intermediates in the causal pathway in outcomes studies. Of course, this doesn't prove anything in this instance, but rather explains my motivation.



Yesterday, I posted my initial thoughts on how the CDC calculated the preliminary influenza vaccine effectiveness for the 2012-2013 season. I've now had several informal discussions, on the phone and in person, with vaccine experts and am starting to understand how and why they conducted the study the way they did. I'm even more convinced now that design considerations (e.g., which patients are included in the cohort) are influencing the statistical analysis, when these should be separate considerations. I will explain this further below.

It appears that the current standard study design in vaccine effectiveness (VE) research is called the "test-negative" control design. This is a modified case-control design in which controls are required to have been tested for influenza and found to be test negative. It appears to be a useful design when the specificity of the diagnostic test for influenza is low (Orenstein EW 2007), which is not the case in the current MMWR report. Another reason mentioned for selecting this design is that it is thought to reduce bias, since vaccinated individuals may be more likely to seek medical care than unvaccinated individuals. However, this benefit would apply to any patient seen in the outpatient clinic and would not necessarily require diagnostic testing for influenza.

In favor of cohort studies over case-control or test-negative designs is the fact that "if specificity was 100%, the cohort method VE estimate was equivalent to the true VE, regardless of test sensitivity or ARs of influenza- and non-influenza-ILIs." (Orenstein EW 2007) The Orenstein paper's stated purpose was to study the impact of the sensitivity and specificity of influenza testing on VE estimates across these study designs. Any benefits of the test-negative approach appear to evaporate when using an excellent diagnostic test, like the RT-PCR used in the MMWR report.
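To see why specificity matters so much here, below is a rough back-of-the-envelope sketch (my own, with hypothetical attack rates rather than numbers from the Orenstein paper) of how test sensitivity and specificity distort a cohort-style VE estimate. It reproduces the quoted point: with 100% specificity the estimate equals the true VE at any sensitivity, while imperfect specificity drags the estimate toward zero:

```python
# Rough sketch with hypothetical inputs (not taken from Orenstein EW 2007) of how
# test sensitivity and specificity affect a cohort-style VE estimate.

def observed_cohort_ve(true_ve, flu_ar, nonflu_ili_ar, sensitivity, specificity):
    """VE estimated from apparent 'test-positive' risk in vaccinated vs. unvaccinated."""
    # Apparent flu risk = true positives + false positives arising from non-flu ILI
    risk_unvax = sensitivity * flu_ar + (1 - specificity) * nonflu_ili_ar
    risk_vax = sensitivity * flu_ar * (1 - true_ve) + (1 - specificity) * nonflu_ili_ar
    return 1 - risk_vax / risk_unvax

# Hypothetical: true VE 62%, flu attack rate 5%, non-flu ILI attack rate 10%
for sens, spec in [(0.7, 1.0), (1.0, 1.0), (0.9, 0.9), (0.9, 0.7)]:
    ve = observed_cohort_ve(0.62, 0.05, 0.10, sens, spec)
    print(f"sensitivity={sens:.1f}, specificity={spec:.1f} -> estimated VE = {ve:.2f}")

# With specificity = 1.0, the estimate equals the true VE (0.62) at any sensitivity;
# as specificity falls, false positives (equal in both groups) dilute the estimate.
```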

From what I can tell, the Orenstein paper is frequently cited to justify the test-negative design, but given current conditions, the design doesn't seem to be as useful as it once was. I also remain unconvinced that a cohort of patients already being seen in a doctor's office can tell us anything about risk factors for "medically attended" influenza. More importantly, I'm still very concerned with how the cohort was analyzed. Even if there are very good reasons to enroll a cohort of patients who presented to a clinic (to avoid bias from vaccinated patients presenting differentially to clinics), and even if all were tested for influenza, it doesn't follow that you need to use an odds-ratio approach to measure vaccine effectiveness when the relative-risk approach is more accurate. It appears that correcting one wrong (differential care-seeking in vaccinated vs. unvaccinated patients) is leading to a countervailing wrong when the analysis is done incorrectly. If they had just called this a cohort of tested patients, or a "tested cohort," perhaps this wouldn't have happened.

Oh, and I'm still not sure why they are excluding influenza B-positive patients from their VE calculation for influenza A. Shouldn't they have to look at each A strain separately then? Well, that's another post for another day.


Monday, January 14, 2013

How should we calculate influenza vaccine effectiveness?

You know, I probably should've just been happy with the reports that the 2012-2013 influenza vaccine was 62% effective and called it a day. But this morning I read this nice report by Helen Branswell of the Canadian Press describing why vaccine-virus match isn't the only factor that impacts vaccine effectiveness, and then I made the mistake of looking at the early-release CDC MMWR report more closely. Now, I'm an ID physician-epidemiologist and led the influenza response at the University of Maryland, Baltimore and UMMS during the 2009 H1N1 pandemic. Thus, I'm no stranger to reading these reports, but for some reason, today, they just didn't make sense. It occurred to me that if I'm perplexed, others might be too, so I've decided to post my questions and concerns; hopefully, as I get answers, I'll post them here as well.

Some Background: Read the CDC MMWR report from January 11th and focus on Table 2 (below). This table summarizes the vaccine effectiveness data for the vaccine vs. influenza A, influenza B, and both combined.


Initial Observations: One, it appears that CDC is using a prospective cohort of 1155 sick patients who presented as outpatients for acute respiratory illness, and not a group (cohort) of all patients eligible for vaccination. Ideally, you'd want to determine the likelihood that influenza vaccine prevents clinical illness, visits to the doctor, hospital admission, and mortality. The vaccine effectiveness in the MMWR report can't tell us that, and I'll explain why in #1, below. Additionally, the MMWR report determines vaccine effectiveness using a case-control method and not a cohort method. This might seem to be an esoteric point, but it could have a big influence on how effective we think a vaccine is. I will try to explain this in #2 and #3, below. Finally, I'm not sure how they decided which patients should be included in the uninfected group for their calculations. I'll explain this a bit more in #4 and show how it could bias the estimates of vaccine effectiveness.

1) Why does CDC utilize outpatient, sick controls in their estimates of vaccine efficacy? I suspect this is an issue of expediency and cost-effectiveness. It would be more expensive to enroll 1000 patients in September and track them weekly to see whether they get the vaccine, whether they develop symptoms, and then test them. Of course, they can't easily do randomized studies in the US or elsewhere, since the vaccine is recommended for just about everybody, so randomizing to no vaccine would be unethical. Whatever the reason, selecting an entire cohort of patients already sick enough to visit their doctor does not tell us how effective the vaccine is in preventing illness, preventing visits to the doctor, preventing hospitalization, or preventing death. The MMWR report can only tell us how effective the vaccine is in preventing an influenza infection vs. another infection, conditional on already being sick enough to go to the doctor's office. What does that mean for people trying to decide if they should get vaccinated?

Also, could it be that selecting this cohort biases the findings in other ways? What if vaccinated patients are more likely to seek medical care for their symptoms? What if those who develop acute respiratory illness are different or sicker than healthy controls in a systematic way? These could impact the measure of vaccine effectiveness. Additionally, using outpatient, sick controls leaves out two very important groups: hospitalized patients and healthy people who never developed an illness in the first place. I suspect that declining funding for CDC and other groups is behind this - you get what you pay for. However, none of the reports I've read explain this limitation when reporting vaccine effectiveness. They should.

Note: I've added a second post describing a bit more why CDC selected this cohort of patients.

2) Why does the CDC measure vaccine effectiveness using odds ratios even when they have a cohort of patients? To explain further, a case-control study would be one where they find 1000 (or any number of) influenza-positive patients and then look back to see if they were vaccinated, and then find another set of 1000 influenza-negative controls (healthy, sick, whatever) and see if they were vaccinated. Here, they identified a cohort of patients with acute respiratory illness first and then determined their influenza status and vaccine status retrospectively. Thus, this is a "retrospective" cohort study. Just because the cohort was established conditional on having an outpatient visit for a respiratory complaint does not mean it isn't a cohort. This matters since they report odds ratios and not relative risks. And as a reminder, when baseline or initial risk is high, the odds ratio can overestimate the relative risk. To find out how and why this is important, read this BMJ article.

3) Did measuring vaccine effectiveness using an odds ratio (OR) method (as it appears the CDC did) vs. the relative-risk (RR) method, as normally used in cohort studies, matter? The question here is not a theoretical one, as above; rather, I'm asking whether, if we used the exact numbers in the MMWR report but applied a cohort or relative-risk method, we would get a different estimate of effectiveness. Short answer: yes.

If we take the table showing attack rates in vaccinated vs unvaccinated for influenza A only (from Table 2 here), we get very different results based on the method used to calculate efficacy.


Using the CDC (OR) method, vaccine effectiveness (VE) = (1 - OR)*100, or (1 - ad/bc)*100. Using that method, the calculated VE = 53.4% (CDC reports 55% in their table, since they adjusted for site).

Using the RR (cohort) method, VE = (1 - RR)*100, where RR = (a/(a+b)) / (c/(c+d)). Using that method, the calculated VE = 44.1%.

This is a very big difference, with a 9.3% absolute reduction in effectiveness by method alone! It seems that, since site-level variation is not a big driver of the effectiveness, the RR approach might be more accurate. Of note, when you do the above analysis for the vaccine vs. influenza A or B, the VE falls from 62% using the OR approach to 47% using the RR approach.
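For anyone who wants to reproduce this kind of comparison, here is a small sketch of the two formulas above applied to a hypothetical 2x2 table (the actual cell counts are in the MMWR table, which isn't reproduced here, so the numbers below are illustrative only):

```python
# The two VE formulas from above, applied to a hypothetical 2x2 table:
#   a = vaccinated, flu-positive      b = vaccinated, flu-negative
#   c = unvaccinated, flu-positive    d = unvaccinated, flu-negative
# The counts are made up for illustration; the real ones are in the MMWR table.

def ve_odds_ratio(a, b, c, d):
    """Case-control style: VE = (1 - OR) * 100, where OR = ad/bc."""
    return (1 - (a * d) / (b * c)) * 100

def ve_relative_risk(a, b, c, d):
    """Cohort style: VE = (1 - RR) * 100, where RR = [a/(a+b)] / [c/(c+d)]."""
    return (1 - (a / (a + b)) / (c / (c + d))) * 100

a, b, c, d = 100, 300, 200, 280  # hypothetical counts with a high test-positive fraction
print(f"VE, OR method: {ve_odds_ratio(a, b, c, d):.1f}%")    # ~53.3%
print(f"VE, RR method: {ve_relative_risk(a, b, c, d):.1f}%")  # ~40.0%
```

The gap between the two widens as the proportion testing positive among the unvaccinated rises, which is exactly the high-baseline-risk situation described in the BMJ article linked above.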

*I hope someone can explain why they are analyzing cohort data using case-control methods. For more information on how I calculated these estimates, see this paper by Walter Orenstein et al. from 1985. It appears this case-control method is standard in the influenza vaccine literature.

4) Why did CDC leave the influenza B-positive patients out of their calculation of the effectiveness of the vaccine versus influenza A, and vice versa? When looking at Table 2 above, one thing struck me as odd. When doing the three effectiveness calculations, they used the same control group. To me, if you don't have influenza A, you should be included in the "uninfected group" for testing the effectiveness of the vaccine against influenza A. To see if this matters, I added in the 180 patients who were influenza B-positive AND influenza A-negative that the CDC left out of their calculation. Here is the new 2x2 table:


Here, if I use the CDC (case-control or odds-ratio) method, I find a VE of 41%, and if I use the cohort method, I find a VE of 34.4%. These results are so different from those reported in the MMWR that I'd be very interested to know why they chose to leave the influenza B-positive patients out.

OK. For influenza A, the vaccine effectiveness was reported as 55% in the MMWR report. Depending on how I calculated the vaccine effectiveness, I found that it ranged from 53.4% to 34.4%, with the more accurate estimate likely closer to 34%.  A pretty huge range, don't you think?  Perhaps these reports should calculate effectiveness in a number of different ways and provide them in a sensitivity analysis.  Better yet, we should fund prospective cohort studies that include healthy patients and measure the true effectiveness of the vaccine. Even better, a universal influenza vaccine would render this all moot, but that's in the future...

Saturday, January 12, 2013

CDC versus Google Trends


Influenza is capturing a lot of media attention this week. Boston declared a public health emergency, activity is widespread in all but two states, and I’ve heard reports of sporadic shortages (of masks, lab testing supplies, pediatric oseltamivir preparations, etc.). Meanwhile, Google Flu Trends (which tracks flu activity via internet searches) has been “blinking red” for several weeks. This Slate piece asks an interesting question: should we be paying more attention to the Google data, or….should we have paid more attention to it as the trend line started ramping up in late November? I suppose the answer depends upon what we could have done earlier to blunt the impact of what the CDC predicts to be a “moderately severe” flu season. Push the vaccine more aggressively, and otherwise get a head start on local preparation, including supplies of masks, antivirals, vaccine, etc.? Promote social distancing or provide advance warning against presenteeism (both at work and school)?

I can’t conclude this post without pointing out that our colleague, Phil Polgreen, published his work on influenza prediction using Yahoo search data before the oft-cited Google paper was published. Phil followed this with some crazy-good Twitter work, too. Credit where credit is due, and all that.

Tuesday, January 8, 2013

What counts? An ethnographic study of counting central line infections

Special Guest Post by: Mary Dixon-Woods, University of Leicester

Surveillance of infections depends, of course, on being able to count them. Recent years have seen several studies (here, here and here) revealing differences in how central line infections get counted. Problems of non-comparability of infection data didn’t matter so much when institutions were conducting surveillance for their own internal purposes. Now that infection rates have become a high stakes metric, tied to financial and reputational sanctions, comparability has become much more important. But why differences in measurement occur is not clear, and one suspicion is that “gaming” – where healthcare workers deliberately manipulate the data – might be responsible. We set out to find out more.

We used the opportunity presented by Matching Michigan, a programme modeled largely on the iconic Michigan-Keystone project, to explore what English hospitals did when asked to collect and report their ICU-acquired central line infection rates. Funded by the Health Foundation, a major UK charitable foundation, we used ethnographic methods involving direct observations in intensive care units, interviews with hospital staff, and documentary analysis.

The results, published recently in the Milbank Quarterly, showed that even though all hospitals in the programme were given standardised definitions based on those used by the US CDC, variability in how they gathered and interpreted the data was rife. But gaming played little or no role in explaining what happened. Most variability in fact arose for what we called "mundane" reasons. These included challenges in setting up data collection systems (we identified three distinct systems in use), different practices in sending blood samples for analysis, and difficulties in deciding the source of infections.

One interesting and unusual feature of Matching Michigan was that it asked units to distinguish, based on the CDC definitions, between catheter-associated infections and catheter-related infections. The difference between them relates to the standard of evidence needed to establish whether an infection originated in the central line or not.

To satisfy the catheter-associated definition, only one positive blood sample (or catheter tip) is needed, plus a clinical judgement about where the infection is coming from. This definition tends to increase sensitivity at the expense of specificity, but that may be good enough if the main purpose is to guide clinical decisions and provide a reasonable estimate of how well infections are being controlled. Satisfying the catheter-related definition, on the other hand, requires two blood samples: one from the central line and one from somewhere else in the body. Both must test positive for the same micro-organism, determined using semi-quantitative or quantitative techniques. While it provides a better standard of proof, it requires a lot more resources; many laboratories in England were not equipped to support it, and didn't necessarily consider it a clinical priority to do so. A majority of infections reported to the programme therefore relied on the catheter-associated definition.
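As a rough illustration of the evidentiary gap between the two definitions, here is a simplified sketch of the decision logic as described above (the data structure and field names are hypothetical, and this is a simplification of the programme's working definitions, not the full CDC criteria):

```python
# Simplified sketch of the two definitions as described above -- NOT the full CDC
# criteria. The data structure and field names are hypothetical, for illustration.

from dataclasses import dataclass

@dataclass
class PositiveCulture:
    source: str         # e.g. "central_line", "peripheral_blood", "catheter_tip"
    organism: str       # micro-organism identified
    quantitative: bool  # grown using a quantitative or semi-quantitative technique

def meets_catheter_related(cultures):
    """Paired positives (line plus another site) with the same organism,
    determined by quantitative or semi-quantitative techniques."""
    line = [c for c in cultures if c.source == "central_line" and c.quantitative]
    other = [c for c in cultures if c.source != "central_line" and c.quantitative]
    return any(l.organism == o.organism for l in line for o in other)

def meets_catheter_associated(cultures, clinician_attributes_to_line):
    """A single positive blood sample or catheter tip, plus a clinical judgement
    that the central line is the likely source."""
    return len(cultures) >= 1 and clinician_attributes_to_line

cultures = [PositiveCulture("central_line", "S. epidermidis", True)]
print(meets_catheter_associated(cultures, clinician_attributes_to_line=True))  # True
print(meets_catheter_related(cultures))  # False -- no matching culture from another site
```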

Using the catheter-associated infection definition, because it relied on clinical judgement, invited inevitable ambiguities about what counted as a central line infection. Those decisions were typically made by ICU physicians (not infection preventionists as in the original Keystone study), and those decisions were made using different kinds of evidence in different ways. For instance, physicians varied in their practices for routine screening of catheter tips; in their propensity to initiate treatment in cases of suspected infection; and in the number and kinds of samples they sent to laboratories for analysis. This meant that those charged with counting the infections were using very different source information.

Despite absence of evidence of deliberate manipulation, the fact that data had to be reported externally to the programme did appear to have some bearing on “what counted”, though not in consistently predictable ways. For instance, some units decided that patients at low risk for central line infections (such as those who had a line in for just a few hours after cardiac surgery) were not eligible for counting in the programme, while others excluded those seen to be unusual in some way (such as those with multiple lines in after complex surgery). This meant that neither the denominators nor the numerators for calculating the infection rates were counted in exactly the same ways across all the ICUs in the programme.

We concluded that unless hospitals are deploying the same methods to generate the data, using their reported rates to produce league tables or performance or impose financial sanctions is probably not appropriate. Much more needs to be done to ensure that reported infection rates are credible, useful, fully integrated with clinical priorities, and comparable.


Wednesday, January 2, 2013

TB Infection Control in Resource-Limited Settings

CDC has produced a new PEPFAR-funded video, as part of an implementation package, designed to assist with TB infection control. It's based on recent (2009) WHO guidelines. There is also an active discussion group that covers TB infection control. h/t Philip Lederer

 

Tuesday, January 1, 2013

Rosie reincarnated

As a child I never liked cartoons much, with one exception--The Jetsons. The Jetsons were a family that lived in Orbit City in 2062. The parents were George and Jane, and the kids were Judy and Elroy. But there was also the family dog, Astro, and their cleaning lady, Rosie the Robot.

I couldn't help but think of Rosie as I read an article in today's Baltimore Sun on the use of disinfecting robots at Johns Hopkins Hospital. These robots seem to be all the rage. If you want to dig a little deeper, Clinical Infectious Diseases has the full account of the study done by Trish Perl's group. In the study, rooms whose previous occupants had MDROs were subjected to standard cleaning in 3 hospital units. In 3 other units, such rooms were disinfected by robots that produce hydrogen peroxide vapor. Subsequent room occupants were assessed for MDROs to determine whether transmission to the new occupant had occurred. The investigators demonstrated a significant reduction for VRE in subsequent occupants, but no significant decrease for MRSA, C. difficile, or MDR gram-negative rods. Of note, the study was partially funded via in-kind services by the robot maker. An excellent editorial by Cliff McDonald and Matt Arduino accompanies the paper.

The bottom line is that the authors demonstrated an absolute 6% reduction in VRE acquisition with use of robotic disinfection. Does that result warrant jumping on the robot bandwagon? I don't think so, for several reasons. First, with the exception of a few special patient populations (e.g., oncology and transplant patients), VRE has little pathogenic potential. Second, only a fraction of newly colonized VRE patients will develop infection. Third, a big problem with the study is that molecular typing was not performed. A few years ago, one of my IPs noted that 4 consecutive patients housed in the same room in our medical ICU all developed VRE bloodstream infections. I was certain that we had a problem with suboptimal cleaning between patients. Fortunately, we had all the isolates, and when molecular testing was performed we were surprised to find that the isolates were genetically quite distinct. It seems that VRE, while transmissible, is also an organism for which antibiotic pressure is constantly creating new strains in individual patients. Lastly, even if this technology were perfect and rendered a room absolutely sterile, within seconds of the robot leaving and the humans returning, the room would once again be contaminated. It reminds me of the hysterical response of grade schools several years ago: shutting down a school and bleaching it after a child was found to have MRSA.

So to any hospital thinking about hiring Rosie, ponder long and hard, and consider taking all that money and using it to drive hand hygiene compliance to a new level. Remember, in health care, the hands remain the final common pathway.

Happy New Year!