Wrong answer

This morning I stumbled upon this piece, Wrong Answer (free full text here), by Rachel Aviv in The New Yorker. It's an older article, from 2014, but a wonderfully written, compelling, sad tale. It's the story of how high-stakes standardized testing of middle school students in economically disadvantaged neighborhoods in Atlanta led to cheating by teachers. It focuses on Damany Lewis, a superb teacher totally committed to his students, who worked tirelessly to improve his students' math knowledge and succeeded in doing so, but not by enough to hit an unreachable goal. Under mounting pressure to raise test scores, he and other teachers began changing answers on students' tests. We all know that cheating is unethical, and at first glance I bet most of us would argue for punishing those involved, but read the entire piece (warning: it's long), and you're likely to soften your stance. The consequences of missing unreasonable targets were so severe that the teachers felt compelled to cheat in what they saw as the best interest of their students.

Now take this article out of the education setting and into the world of healthcare epidemiology, and if you're like me, chills will go down your spine as you read it. It should be required reading for anyone who works in healthcare quality and for the key stakeholders in this space, from those at the front lines, to those who work in professional societies, to those who create policy at the state or national level. There are also lessons here for patients and patient advocates.

Donald Campbell
What happened in Atlanta shouldn't surprise us. In 1979, Donald Campbell, a psychologist, published a paper, the crux of which has become known as Campbell's law. I wasn't aware of this until I read Aviv's article. It states: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." To move this into our world, delete the word "social" from Campbell's law and take a look at Dan Sexton's 2012 commentary, Casablanca Redux. Here's an excerpt:
Our informal discussions with other hospital epidemiologists, our experience in evaluating the source of infection in hundreds of bacteremic intensive care unit (ICU) patients, and common sense have led us to suspect that many hospitals do not accurately report their true rates of CLABSI, using current NHSN definitions. In some cases this may reflect an unwillingness of local staff to accept these definitions as accurate or fair; in other situations it may reflect an unconscious desire to hedge or reduce their rate of CLABSI to avoid criticism and negative consequences from their local supervisors in the press, clinicians, or the general public who review their publicly reported data... If clinicians inappropriately or illogically fear or anticipate negative feedback about the rate of CLABSI in their institutions, they may consciously or subconsciously fail to obtain blood culture results for every patient with a possible or likely BSI. Simply put: no culture equals no infection, using standard definitions of CLABSI. 
At some level, we are all complicit in this. And depending on the action, it may not be the wrong thing to do. In fact, it may benefit the patient. For example, better diagnostic stewardship in the form of appropriately ordering fewer urine cultures not only lowers CAUTI rates but also reduces antibiotic utilization, with several resultant benefits. Still, it's important to note that the primary impetus for this was to lower HAI rates. We take into consideration how a new diagnostic test may impact HAI rates and may even allow that to drive the decision to implement it (see an excellent paper by Dan on this here). We may allow clinicians to veto infections that infection preventionists have detected, even though the cases meet NHSN definitions. And at the extreme, hospitals may engage in practices that may harm patients in order to reduce publicly reported HAI rates. In a recent publication on how physicians in training view quality initiatives, a dirty secret was elicited from a resident during a focus group at an academic medical center: “There’s like the central line infection protocols…. If you suspect that anybody has any type of bacteremia, you don’t do a blood culture, you just do a urine culture and pull the lines … we just don’t even test for it because the quality improvement then like marks you off.”

While reading Rachel Aviv's article, I wondered: Do we ever ignore results (i.e., infection rates) that seem too good to be true, as the educational administrators in Atlanta did? Do we critically analyze surprisingly good results to the same degree as surprisingly bad ones? In Damany Lewis's case, did the end justify the means? Is there ever a situation where I could be pushed to a similar point as Lewis?

The Atlanta school system harmed students and teachers in a thoughtless quest to improve quality. There were no winners. Sadly, the response to the cheating scandal was to raise the stakes for test scores even higher. With pay for performance, the same thing is happening in health care.


  1. Very interesting post, and reminiscent of reported rates of hand hygiene in most NHS hospitals in excess of 90% compared with a mean compliance in reported literature under 50%!

  2. Very thought provoking. This seems a corollary of Goodhart's law (1975): "When a measure becomes a target, it ceases to be a good measure," which Mary Dixon-Woods and I wrote about here as it relates to CLABSI.


