Before going further, I want to highlight the purpose of yesterday's and today's posts. Occasionally it is important to question standard practice in epidemiology when something doesn't make sense to you. I've experienced this firsthand 3-4 times in infectious disease epidemiology and consider it an important exercise, and one that is critically important to our field. If you want to see how we've questioned ID epidemiology dogma around control-group selection, quasi-experimental design methodology, and inappropriately controlling for intermediates on the causal pathway in outcomes studies, you can read this post. Of course, none of that proves anything in this instance; it simply explains my motive.
Yesterday, I posted my initial thoughts on how the CDC calculated the preliminary influenza vaccine effectiveness for the 2012-2013 season. I've since had informal discussions, by phone and in person, with several vaccine experts and am starting to understand how and why they conducted the study the way they did. I'm now even more convinced that design considerations (e.g., which patients are included in the cohort) are driving the statistical analysis, when the two should be separate decisions. I will explain this further below.
It appears that the current standard study design in vaccine effectiveness (VE) is the "test-negative" design. This is a modified case-control design in which controls must have been tested for influenza and found "test negative." The design appears to be useful when the specificity of the diagnostic test for influenza is low (Orenstein EW 2007), which is not the case in the current MMWR report. Another reason given for selecting this design is that it is thought to reduce bias, since vaccinated individuals may be more likely to seek medical care than unvaccinated individuals. However, that benefit would apply to any patient seen in the outpatient clinic and would not necessarily require diagnostic testing for influenza.
One point in favor of cohort studies over case-control or test-negative designs is the finding that "if specificity was 100%, the cohort method VE estimate was equivalent to the true VE, regardless of test sensitivity or ARs of influenza- and non-influenza-ILIs." (Orenstein EW 2007) The Orenstein paper's stated purpose was to study the impact of the sensitivity and specificity of influenza testing on VE estimates across these study designs. Any benefit of the test-negative approach appears to evaporate when using a highly accurate diagnostic test, like the rt-PCR used in the MMWR report.
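To make that result concrete, here is a minimal sketch of how test error propagates into a cohort VE estimate. All the numbers (attack rates, sensitivity, the non-influenza ILI rate) are hypothetical values I've chosen for illustration, not figures from the Orenstein paper or the MMWR report; the model simply assumes false positives arise from non-influenza ILI at the same rate in both arms.

```python
def measured_cohort_ve(ar_vacc, ar_unvacc, sens, spec, ar_nonflu_ili):
    """Cohort-design VE (%) using an imperfect influenza test.

    Measured attack rate in each arm mixes detected true cases with
    false positives among non-influenza ILI (assumed equal in both arms).
    """
    m_vacc = sens * ar_vacc + (1 - spec) * ar_nonflu_ili
    m_unvacc = sens * ar_unvacc + (1 - spec) * ar_nonflu_ili
    return 100 * (1 - m_vacc / m_unvacc)

# True VE with these hypothetical attack rates: 1 - 0.02/0.05 = 60%.
# With 100% specificity, sensitivity cancels in the risk ratio,
# so the estimate is unbiased even at 70% sensitivity:
measured_cohort_ve(0.02, 0.05, sens=0.7, spec=1.0, ar_nonflu_ili=0.10)  # ≈ 60.0

# Drop specificity to 90% and false positives dilute both arms,
# biasing the estimate toward the null:
measured_cohort_ve(0.02, 0.05, sens=0.7, spec=0.9, ar_nonflu_ili=0.10)  # < 60
```

The algebra behind the first call is the whole point: with no false positives, sensitivity multiplies both attack rates and divides out of the risk ratio.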
From what I can tell, the Orenstein paper is frequently cited to justify the test-negative design, but under current conditions the design doesn't seem as useful as it once was. I also remain unconvinced that a cohort of patients already being seen in a doctor's office can tell us anything about risk factors for "medically attended" influenza. More importantly, I'm still very concerned with how the cohort was analyzed. Even if there are very good reasons to enroll a cohort of patients presenting to a clinic (to avoid bias from vaccinated patients differentially seeking care), and even if all were tested for influenza, it doesn't follow that an odds-ratio approach must be used to measure vaccine effectiveness when the relative-risk approach is more accurate. Correcting one wrong (differential care-seeking in vaccinated vs. unvaccinated patients) is producing a countervailing wrong when the analysis is done incorrectly. If they had just called this a cohort of tested patients, or a "tested cohort," perhaps this wouldn't have happened.
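To illustrate why the choice of measure matters, here is a short sketch comparing VE computed from the relative risk (1 - RR) and from the odds ratio (1 - OR) on the same 2x2 table. The counts are hypothetical, made up for illustration and not taken from the MMWR report; the point is that the two estimates agree when influenza is rare among those enrolled but diverge when it is common, as in a cohort restricted to tested, symptomatic patients.

```python
def ve_from_counts(vacc_cases, vacc_total, unvacc_cases, unvacc_total):
    """Return (VE via relative risk, VE via odds ratio), both in percent."""
    rr = (vacc_cases / vacc_total) / (unvacc_cases / unvacc_total)
    odds_vacc = vacc_cases / (vacc_total - vacc_cases)
    odds_unvacc = unvacc_cases / (unvacc_total - unvacc_cases)
    odds_ratio = odds_vacc / odds_unvacc
    return 100 * (1 - rr), 100 * (1 - odds_ratio)

# Rare outcome: 5% vs 10% attack rates. OR approximates RR,
# so the two VE estimates nearly agree:
ve_from_counts(50, 1000, 100, 1000)   # ≈ (50.0, 52.6)

# Common outcome: 30% vs 50% attack rates, plausible once the cohort is
# restricted to tested ILI patients. The OR-based VE now overstates
# the RR-based VE substantially:
ve_from_counts(300, 1000, 500, 1000)  # ≈ (40.0, 57.1)
```

The second call is the scenario I'm worried about: the same data yield 40% effectiveness on the risk scale but 57% on the odds scale.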
Oh, and I'm still not sure why they are excluding influenza B-positive patients from their VE calculation for influenza A. Shouldn't they then have to look at each A strain separately? Well, that's another post for another day.