Part 2: Influenza Vaccine Effectiveness - The Test-Negative Control Design

Before going further, I want to highlight the purpose of yesterday's and today's posts. Occasionally, it is important to question standard practice in epidemiology when something doesn't make sense to you. I've experienced this firsthand 3-4 times in infectious disease epidemiology and consider it an important exercise, one that is critically important to our field. You can read this post if you want to learn more about how we've questioned ID epidemiology dogma around control-group selection, quasi-experimental design methodology, and inappropriately controlling for intermediates in the causal pathway in outcomes studies. Of course, this doesn't prove anything in this instance, but it explains my motive.

Yesterday, I posted my initial thoughts on how the CDC calculated the preliminary influenza vaccine effectiveness for the 2012-2013 season. I've since had several informal discussions, on the phone and in person, with vaccine experts and am starting to understand how and why they conducted the study the way they did. I'm even more convinced now that design considerations (e.g., which patients are included in the cohort) are influencing the statistical analysis, when these should be separate considerations. I will explain this further below.

It appears that the current standard study design in vaccine effectiveness (VE) is called the "test-negative" control design. This is a modified case-control design in which controls are required to have been tested for influenza and found "test negative." It appears to be a useful design when the specificity of the diagnostic test for influenza is low (Orenstein EW 2007), which is not the case in the current MMWR report. Another reason mentioned for selecting this design is that it is thought to reduce bias, since vaccinated individuals may be more likely to seek medical care than unvaccinated individuals. However, this benefit would apply to any patient seen in the outpatient clinic and would not necessarily require diagnostic testing for influenza.
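For concreteness, here is a minimal sketch of the test-negative calculation using made-up counts (these are not the MMWR numbers): test-positive patients serve as the cases, test-negative patients serve as the controls, and VE is estimated as 1 minus the odds ratio of vaccination.

```python
# Test-negative design: minimal sketch with hypothetical counts
# (illustrative only; not the MMWR data).
#
#                  vaccinated   unvaccinated
# test-positive        a              b        <- "cases"
# test-negative        c              d        <- "controls"
a, b = 100, 200   # influenza test-positive
c, d = 300, 300   # influenza test-negative

odds_ratio = (a * d) / (b * c)   # odds of vaccination: cases vs. controls
ve = 1 - odds_ratio              # VE estimated as 1 - OR
print(f"OR = {odds_ratio:.2f}, estimated VE = {ve:.0%}")  # OR = 0.50, VE = 50%
```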

In favor of cohort studies over case-control or test-negative designs is the fact that "if specificity was 100%, the cohort method VE estimate was equivalent to the true VE, regardless of test sensitivity or ARs of influenza- and non-influenza-ILIs" (Orenstein EW 2007). The Orenstein paper's stated purpose was to study the impact of the sensitivity and specificity of influenza testing on VE estimates across these study designs. Any benefits of the test-negative approach appear to evaporate when using an excellent diagnostic test, like the RT-PCR assay used in the MMWR report.
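To see why, here is a toy illustration of the quoted result, with invented risks: when specificity is 100% there are no false positives, so imperfect sensitivity scales the detected risk by the same factor in both arms and simply cancels out of the cohort relative risk.

```python
# Toy illustration of the quoted Orenstein result (invented risks):
# with 100% specificity, imperfect sensitivity scales the detected
# risk equally in both groups, so it cancels in the cohort RR.
true_risk_vax, true_risk_unvax = 0.10, 0.25   # hypothetical true risks

for sens in (1.0, 0.7, 0.4):
    detected_rr = (true_risk_vax * sens) / (true_risk_unvax * sens)
    print(f"sensitivity = {sens:.0%}: cohort VE = {1 - detected_rr:.0%}")
# cohort VE = 60% at every sensitivity
```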

From what I can tell, the Orenstein paper is frequently cited to justify the test-negative design, but given current conditions, the design doesn't seem to be as useful as it once was. I also remain unconvinced that a cohort of patients already being seen in a doctor's office can tell us anything about risk factors for "medically attended" influenza. More importantly, I'm still very concerned with how the cohort was analyzed. Even if there are very good reasons to enroll a cohort of patients who presented to a clinic, in order to avoid bias from vaccinated patients presenting differentially to clinics, and even if all were tested for influenza, it doesn't follow that you need an odds-ratio approach to measure vaccine effectiveness when the relative-risk approach is more accurate. It appears that correcting one wrong (differential medical care seeking in vaccinated vs. unvaccinated patients) leads to a countervailing wrong when the analysis is done incorrectly. If they had just called this a cohort of tested patients, or a "tested cohort," perhaps this wouldn't have happened.
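To make the concern concrete, here is one hypothetical "tested cohort" analyzed both ways (again, invented counts rather than the MMWR data). When influenza is common among those tested, as during an outbreak, the odds ratio drifts away from the relative risk, and 1 - OR overstates 1 - RR:

```python
# One hypothetical tested cohort, two analyses (invented counts).
pos_vax, neg_vax = 100, 300       # vaccinated: 100 flu-positive of 400 tested
pos_unvax, neg_unvax = 200, 300   # unvaccinated: 200 flu-positive of 500 tested

# Relative-risk (cohort) approach: risk = positives / all tested in the group
rr = (pos_vax / (pos_vax + neg_vax)) / (pos_unvax / (pos_unvax + neg_unvax))

# Odds-ratio (test-negative) approach
odds_ratio = (pos_vax * neg_unvax) / (pos_unvax * neg_vax)

print(f"VE via RR: {1 - rr:.0%}")          # 38% (hypothetical)
print(f"VE via OR: {1 - odds_ratio:.0%}")  # 50% (hypothetical)
```

Note the direction of the gap: the OR-based VE is the larger, more flattering number, consistent with the 62% vs. 47% difference discussed in the comments below.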

Oh, and I'm still not sure why they are excluding influenza B-positive patients from their VE calculation for influenza A. Shouldn't they then have to look at each A strain separately? Well, that's another post for another day.


Comments

  1. Actually, the key to the test-negative design is the "incidence density" implicit matching. If the controls are selected from the test negatives _at the same time as the case_, then the odds ratio can be shown to be mathematically equal to the relative risk of disease in vaccinated vs. unvaccinated, because the "time at risk" for the cases and controls is the same (a compact version of this argument is sketched after the comments). A paper that shows the proof is Smith Int J Epi 1994 (although there is a typo in the appendix that makes it a little less clear than it should be).

    This doesn't address your other issues, and another one is that a proportion of test negatives are actually false negatives if they present late. The effect of this is to bias the estimate of VE to the null.

  2. Thanks AC. I agree with what you have written. However, in the MMWR report, it doesn't appear that "controls are selected." It appears that they have included the whole cohort of patients tested, since neither the MMWR report nor the CID paper from 2012 describes sampling or selection of controls. I agree that the OR might be shown to be mathematically equal to the relative risk (thanks for the reference!), but that statement highlights the primacy of the RR. I have also completed the math for both the RR and OR calculations of vaccine effectiveness, and in this case they're not equal: the VE falls from 62% to 47%.

  3. I meant that the odds ratio in the incidence density sample (controls are still 'selected' by being tested, but yes, it isn't a true case-control study) is equal (i.e., an unbiased estimator) to the relative risk in the hypothetical cohort of all patients at risk of influenza (i.e., 1 - true vaccine effectiveness).

    The relative risk in the sample is not the same as the odds ratio in the sample, as you've calculated, but the relative risk in the sample is a biased estimator.

    I see Dr Jackson has already replied and explained it more clearly than I have.

  4. The big question is: do we consider this study an incidence study or a prevalence study? In an incidence study (cohort), the odds ratio is thought to have no meaning (Greenland 1987 Am J Epidemiol). In a prevalence study, the prevalence odds ratio approximates the incidence rate ratio only under a steady state (an incidence rate that is constant over time) and when the exposure has no effect on disease duration. In an outbreak, we wouldn't expect a constant incidence rate, and we certainly expect the vaccine to impact the duration of disease (i.e., shorter duration with vaccine).

    So in a prevalence study, there could be issues with all measures, but the key factors required for the OR to approximate the incidence rate ratio are not present in this study, since there is no steady state and the average duration of disease is not the same in the exposed and unexposed. Thus, we can't claim the risk ratio (RR) is more biased than the OR in this case.

    At least we agree that this isn't a case-control study!

  5. Sorry, I gave you the wrong reference. The one that disputes this is: Rodrigues L, Kirkwood BR. Case-control designs in the study of common diseases: updates on the demise of the rare disease assumption and the choice of sampling scheme for controls. Int J Epidemiol. 1990;19:205-13.

    I can give you the mathematical proof but can't easily do formulae in the comments section - I'll email you.

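A compact version of the incidence-density argument raised in the first comment, in my own notation (a sketch of the standard density-sampling result, not the full proof given in the cited papers): if, at each case's event time, controls are drawn from the concurrent test-negatives in proportion to person-time at risk, the vaccination odds among controls estimate the ratio of vaccinated to unvaccinated person-time, and the odds ratio reduces to the incidence rate ratio:

$$
\widehat{OR} = \frac{A_v/A_u}{C_v/C_u} \approx \frac{A_v/A_u}{PT_v/PT_u} = \frac{A_v/PT_v}{A_u/PT_u} = \widehat{IRR}, \qquad \widehat{VE} = 1 - \widehat{OR} \approx 1 - \widehat{IRR}
$$

where $A_v$ and $A_u$ are vaccinated and unvaccinated cases, $C_v$ and $C_u$ are the concurrently sampled test-negative controls, and $PT_v$ and $PT_u$ are the person-time at risk in each group. Without that time-matched sampling (i.e., when a whole season of tested patients is analyzed as one block, as appears to be the case in the MMWR report), the middle approximation has no particular justification, which is the crux of the disagreement above.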

