Late last week, Dr. Jackson, one of the co-authors of the recent MMWR influenza vaccine effectiveness report, sent me an email response to my posts discussing how and why they measure influenza vaccine effectiveness the way they do. I thought that in the interest of fairness, I should post his full email, rather than pasting his response into a largely hidden comment section. I will likely post a follow-up to this at some point. Additionally, I want to thank Dr. Jackson publicly for responding in this fashion, both professionally and academically. The intent of my posts was academic, and I am pleased that he responded the way he did.
Dear Dr. Perencevich,
I am one of the co-authors on the MMWR article on influenza vaccine effectiveness, and I read your blog posts about that article. I believe I can clear up some of your questions. I tried posting this as a comment on the blog, but the website wouldn't let me, so my apologies for e-mailing you instead. Feel free to post this as a comment on the website if you are able.
(1) Regarding your first point, this study used what is known as a “test-negative” design. In the test-negative design, we enroll patients with a medically attended acute respiratory illness (MAARI). We then test these enrollees for influenza, and assess who was previously vaccinated and who was not. The test-negative design is based on an assumption that the rate of MAARI caused by pathogens other than influenza is the same in both vaccinated and unvaccinated persons. If this assumption is true, then our test-negative subjects are representative of the population from which the influenza-positive cases came, and our study does give an estimate of how well the vaccine reduces the risk of getting sick enough to visit the doctor.
Although the paper refers to cases and controls, this is not a true case-control study, since a true case-control study requires that we know who is a case and who is not before we sample them. The design is closer to the “indirect cohort” method proposed by Claire Broome for studying pneumococcal vaccine effectiveness [NEJM 1980; 303:549-52].
The real advantage of the test-negative design is that it controls for differences in healthcare-seeking behavior. If we did the full cohort study you proposed, there would be variation among the cohort members in how often they seek healthcare, and these variations would be related both to their likelihood of being vaccinated and to their likelihood of going to the doctor if they got influenza. By only sampling people who come to the doctor, we control for those differences.
(2) Regarding your second and third points, the relative risk (RR) is not an appropriate measure of association for this study design. The RR is (obviously) a measure of risk, which would be based on the cumulative incidence of disease in some defined cohort. In this study, we do not sample the full cohort; we simply use the influenza-negative subjects to estimate the frequency of vaccination in the cohort. Using the RR in this setting would give a biased estimate of vaccine effectiveness, which is seen in the sample calculations you provided.
Sampling in a test-negative design is conceptually similar to incidence density sampling in a case-control study. When using incidence density sampling, the exposure odds ratio is a direct estimate of the incidence rate ratio and not an approximation to the RR.
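[Editor's note: the arithmetic behind this point can be made concrete with a small sketch. This is not from the email; the cohort size, vaccination fraction, and illness rates below are all hypothetical, chosen only to illustrate that under the design's core assumption, 1 minus the exposure odds ratio recovers the true VE, while a "relative risk" computed within the enrolled sample does not.]

```python
# Hypothetical cohort: 100,000 people, 50% vaccinated, true VE = 60% against
# flu A, and (the key assumption) equal non-flu MAARI risk in both groups.
N = 100_000
p_vax = 0.5             # vaccinated fraction (hypothetical)
flu_rate_unvax = 0.04   # flu A MAARI risk among the unvaccinated (hypothetical)
true_VE = 0.60
nonflu_rate = 0.10      # non-flu MAARI risk, same in both groups (assumption)

# Expected 2x2 counts among enrolled MAARI patients
a = N * p_vax * flu_rate_unvax * (1 - true_VE)   # vaccinated, flu-positive
b = N * (1 - p_vax) * flu_rate_unvax             # unvaccinated, flu-positive
c = N * p_vax * nonflu_rate                      # vaccinated, flu-negative
d = N * (1 - p_vax) * nonflu_rate                # unvaccinated, flu-negative

OR = (a * d) / (b * c)
print(f"VE from odds ratio: {1 - OR:.2f}")       # 0.60 -- recovers the true VE

# A "relative risk" computed within the enrolled sample is biased:
RR = (a / (a + c)) / (b / (b + d))
print(f"VE from sample RR:  {1 - RR:.2f}")       # 0.52 -- does not equal 0.60
```

Note that the odds-ratio result does not depend on the particular rates chosen: the non-flu rate cancels out of the OR, which is exactly why the flu-negative subjects can stand in for the source population.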
(3) Finally, regarding your final question about including the flu B cases in the estimate of VE against flu A: As mentioned above, the test-negative design assumes that the rate of non-flu MAARI is the same in vaccinated and unvaccinated persons. If we included flu B in the non-case group, we would be violating this assumption, because the vaccine does protect against B, and the rate of non-flu A MAARI would no longer be the same in vaccinated and unvaccinated persons.
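[Editor's note: the direction of this bias can be seen by extending the same hypothetical numbers. In this sketch, the flu B rate and the vaccine's effectiveness against B are made-up values; because the vaccine prevents some flu B, vaccinated people contribute fewer flu B "controls", which understates VE against flu A.]

```python
# Same hypothetical cohort as before, restated so this sketch is self-contained.
N, p_vax, true_VE = 100_000, 0.5, 0.60
flu_a_rate_unvax = 0.04   # flu A MAARI risk among the unvaccinated (hypothetical)
nonflu_rate = 0.10        # non-flu MAARI risk, same in both groups (assumption)
flu_b_rate_unvax = 0.02   # flu B MAARI risk among the unvaccinated (hypothetical)
ve_against_b = 0.60       # vaccine also protects against B (hypothetical value)

a = N * p_vax * flu_a_rate_unvax * (1 - true_VE)   # vaccinated, flu A positive
b = N * (1 - p_vax) * flu_a_rate_unvax             # unvaccinated, flu A positive

# Correct comparison group: non-flu MAARI only
c = N * p_vax * nonflu_rate
d = N * (1 - p_vax) * nonflu_rate
print(f"VE, flu-negative controls:      {1 - (a * d) / (b * c):.2f}")       # 0.60

# Flawed comparison group: non-flu MAARI plus flu B cases. The vaccine prevents
# some flu B, so the vaccinated contribute disproportionately few "controls".
c_bad = c + N * p_vax * flu_b_rate_unvax * (1 - ve_against_b)
d_bad = d + N * (1 - p_vax) * flu_b_rate_unvax
print(f"VE, flu B folded into controls: {1 - (a * d_bad) / (b * c_bad):.2f}")  # 0.56
```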
I hope this clears up your questions!
~Mike Jackson, Group Health Research Institute