Why are quality and infection prevention programs like oil & water?

This week, I analyzed our most recent performance in the HOP project. HOP is shorthand for the Hospital Outpatient Quality Data Reporting Program (HOP QDRP), a CMS program whose results are publicly reported on Hospital Compare. HOP is the outpatient analogue of SCIP (the Surgical Care Improvement Project), though HOP has only two metrics: pre-procedure antibiotic selection and appropriate timing of the antibiotic dose. We slice the data by procedure, service, and surgeon to look for areas where performance is suboptimal.

I received an email from one of our senior surgeons who complained that several of his cases had been deemed noncompliant because the pre-procedure antibiotic was not given within the 60-minute window prior to incision. He pointed out that the reason for his "noncompliance" was that these were dialysis patients who had received a dose of vancomycin at dialysis the day before. He was practicing good medicine, because the patients would still have a therapeutic level of vancomycin at the time of the procedures (all vascular access procedures). So we posted a query asking why this situation would be considered noncompliant (i.e., could the rules be changed to allow for it?). We received prompt responses from the physician in charge of the national project, but he avoided answering the question. After multiple emails back and forth, he finally stated that the surgeon did the right thing, but the cases would still be deemed noncompliant. He went on to say that hospitals should not use the data this way (i.e., drill down to the provider level) and that the project leadership could not possibly anticipate all the exceptions for when an antibiotic should not be given within 60 minutes of incision.
Now I have some problems with his thinking: if you are going to publicly report our performance, then you need to be flexible enough to allow for exceptions that actually reflect good practice, and in an environment of 24/7 communication it shouldn't be hard to have a panel of experts rule on requests for exceptions. With SCIP, we've actually been dinged when a pre-op antibiotic was not given before incision for a patient who entered the OR in cardiac arrest! I've blogged before about how these types of problems turn physicians off not just to these specific projects but to quality improvement projects in general.


All of this made me think some more about the differences between quality improvement and healthcare epidemiology. The table below is modified from a plenary talk I gave at SHEA a few years ago. These differences become real sources of friction when QI and hospital epi folks are pulled into common projects like SCIP and HOP.

| Characteristic | Healthcare Epidemiology | Quality Improvement |
| --- | --- | --- |
| Philosophic orientation | Modern | Post-modern |
| Primary influences | Science & medicine | Business |
| Analytic orientation | Population based | Often case based |
| Focus | Exploration & analysis | Modification |
| Primary audience | Internal stakeholders | External stakeholders |
| Primary task | Define problems, elucidate risk factors | Design & implement interventions |
| Content expertise | Almost always | Usually not |
| Strength | Rigorous methodology & validity | Process design |
| Approach | Structured, relatively uniform | Innovative |
| Delivery style | Instructive | Collaborative |
| Solutions | Targeted | Empiric |
| Tactics | Data oriented, relatively dull | Flashy campaigns, catchy slogans |
| Perspective | Long term | Short term, evolving |
| Tempo | Relatively slow | Relatively fast |


I don't have any solutions for how to make the two groups work together more effectively. But perhaps recognizing that our approaches to problems are different is a start.
