Don't pull that trigger!

The headline on the front page of the New York Times this week read "Study Finds No Progress in Safety at Hospitals." The article (graphic shown) reported on a paper in the current issue of the New England Journal of Medicine (free text here). In the study, 240 charts from each of 10 hospitals in North Carolina (2,400 in all) were reviewed using the Institute for Healthcare Improvement's (IHI) Global Trigger Tool. The admissions reviewed spanned the years 2002 to 2007.

Now, I didn't know much about the Trigger Tool, and the methods section of the paper doesn't give much description, so I looked up the guide, which you can review here. The triggers are 53 different indicators that, when observed in the medical record, should prompt further review to assess for an adverse event. For example, administration of Benadryl is a trigger to look for an allergic drug reaction, which according to IHI is an adverse event. Adverse events are further classified by severity and by whether they were preventable. Per the IHI guide, no more than 20 minutes can be spent on the review of any chart (a rule that was also observed in the published study).
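To make the mechanics concrete, here's a minimal sketch of how a trigger-based review works. This is my own toy illustration; the trigger names, the record format, and the confirmation step are assumptions, not IHI's actual specification:

```python
# Hypothetical sketch of trigger-based chart review (illustrative only;
# the trigger names and record format are my own, not IHI's).

TRIGGERS = {
    "diphenhydramine_given": "possible allergic drug reaction",
    "naloxone_given": "possible opioid over-sedation",
}  # the real tool defines 53 triggers across several modules

MAX_REVIEW_MINUTES = 20  # per-chart cap stated in the IHI guide

def review_chart(chart_events, reviewer_confirms):
    """Scan one chart for triggers; each hit prompts a closer look."""
    adverse_events = []
    for event in chart_events:
        if event in TRIGGERS:
            # A human reviewer then investigates the surrounding record,
            # but never beyond the 20-minute budget -- so long, complex
            # charts are necessarily reviewed incompletely.
            if reviewer_confirms(event):
                adverse_events.append(TRIGGERS[event])
    return adverse_events

# Example: a chart containing a Benadryl administration
chart = ["admission", "diphenhydramine_given", "discharge"]
print(review_chart(chart, reviewer_confirms=lambda e: True))
# -> ['possible allergic drug reaction']
```

The key point the sketch makes is that detection is opportunistic: nothing is counted unless a trigger happens to surface it and the reviewer confirms it within the time budget.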

Per the IHI methodology, healthcare-associated infections (HAIs) are both a trigger and an adverse event. Here is what the guide states (p. 17):
Any infection occurring after admission to the hospital is likely an adverse event, especially those related to procedures or devices. Infections that cause admission to the hospital should be reviewed to determine whether they are related to medical care (e.g., prior procedure, urinary catheter at home or in long-term care) versus naturally occurring disease (e.g., community-acquired pneumonia).
Note that HAIs are never actually defined. Unlike the CDC's National Healthcare Safety Network (NHSN), which defines infections using multiple data points, the IHI methodology gives the reviewer no guidance on case ascertainment. I did a PubMed search this morning and found no studies in which the Trigger Tool was compared with NHSN methodology to assess its validity.
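To see why the absence of a case definition matters, compare a definition-based approach with a judgment-based one. This is a toy contrast of my own; the criteria below are simplified stand-ins, not actual NHSN definitions:

```python
# Toy contrast between definition-based and judgment-based case
# ascertainment (simplified stand-ins, not actual NHSN criteria).

def nhsn_style_uti(patient):
    """Multi-criterion and reproducible: every element must be documented."""
    return (patient["catheter_days"] >= 2
            and patient["fever_or_symptoms"]
            and patient["positive_urine_culture"])

def trigger_tool_style_hai(patient, reviewer_opinion):
    """A single subjective judgment: whatever the reviewer decides."""
    return reviewer_opinion(patient)

patient = {"catheter_days": 3, "fever_or_symptoms": True,
           "positive_urine_culture": False}

print(nhsn_style_uti(patient))                          # False: definition not met
print(trigger_tool_style_hai(patient, lambda p: True))  # True: reviewer says so
```

Two reviewers applying the first function to the same chart will always agree; two reviewers applying the second may not, which is exactly the validity problem.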

So here are some concerns I have about this paper and the Trigger Tool:

  • By design, the Trigger Tool is not true surveillance: there is no attempt to detect all instances of harm. Imagine reviewing the medical record of a patient who stayed in the hospital for 8 months under a 20-minute time limit. I can understand how the Trigger Tool might uncover problems in a given hospital using a case-based approach to quality improvement, but using these data to look for trends over time makes no sense, since there is no attempt to capture all cases of harm. Of what value is trending incomplete data? I think this harks back to the philosophical differences between quality improvement and healthcare epidemiology that I've talked about before.
  • I have serious concerns regarding the validity of this approach for HAIs. We know how problematic surveillance can be even when using well-delineated case definitions and how poorly administrative claims data perform for HAIs. The IHI approach seems much more analogous to the administrative data approach.
  • In the New England Journal paper, secular trends were shown only for all harms and for preventable harms, not for any of the component harms, such as HAIs. It would be interesting to see the trended data for HAIs. Recall that AHRQ, using administrative claims data, recently published a paper claiming that HAIs are increasing in the US, while CDC, using much more rigorous surveillance methodology, published the opposite conclusion.
  • Generalizability is also a problem. In this paper, 2,400 hospital records were reviewed from 10 hospitals in a single state. Over the same time period, there were approximately 220 million hospital admissions in the US, meaning that roughly 1 in every 100,000 admissions was reviewed (and only partially, given the 20-minute rule); see the quick arithmetic after this list. While the published paper never attempts to generalize the study findings to the universe of US hospitals, the media certainly did, and the lead author of the study states in the New York Times, “It is unlikely that other regions of the country have fared better.”
  • Some of the instances of "harm" are not preventable, and I'm not sure how they relate to quality of care. For example, consider a patient with no known drug allergy who is treated with an antibiotic and develops a rash. This would be classified as a harm, and it is indeed a harm to the patient, but it is neither predictable nor preventable. How does it help us to trend such data? And how would we attempt to reduce this harm? It is preventable harm that needs our attention.
  • With regard to HAIs, even if these data were valid, I don't believe they reflect the current state of affairs in US hospitals, given how much infection rates have improved since 2007.
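For the sampling fraction in the generalizability bullet above, the arithmetic is simple (figures taken from the post itself):

```python
# Back-of-the-envelope sampling fraction for the study
records_reviewed = 2_400        # 240 charts x 10 North Carolina hospitals
us_admissions = 220_000_000     # approximate US admissions, 2002-2007

print(f"1 in {us_admissions // records_reviewed:,}")
# -> 1 in 91,666, i.e. roughly 1 in 100,000
```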
So here we have another paper that beats us up some more. Truth be told, I'd bet that the quality of care in US hospitals is significantly better today than it was in 2002. It sure would be nice to see that in print, but it probably wouldn't make the front page.

P.S. It's amazing that IHI claimed its safety program saved 123,000 lives in US hospitals, but now claims that during the same time frame there was little evidence of improvement in patient safety. Something doesn't compute...
