How should we calculate influenza vaccine effectiveness?
You know, I probably should've just been happy with the reports that the 2012-2013 influenza vaccine was 62% effective and called it a day. But this morning I read this nice report by Helen Branswell of the Canadian Press describing why vaccine-virus match isn't the only factor that impacts vaccine effectiveness, and then I made the mistake of looking at the early-release CDC MMWR report more closely. Now, I'm an ID physician epidemiologist and led the influenza response at the University of Maryland, Baltimore and UMMS during the 2009 H1N1 pandemic, so I'm no stranger to reading these reports, but for some reason, today, they just didn't make sense. It occurred to me that if I'm perplexed, others might be too, so I've decided to post my questions and concerns, and hopefully, as I get answers, I'll post them here.
Some Background: Read the CDC MMWR report from January 11th and focus on Table 2 (below). This table summarizes the vaccine effectiveness data for the vaccine vs. influenza A, influenza B, and both.
Initial Observations: First, it appears that CDC is using a prospective cohort of 1155 sick patients who presented as outpatients for acute respiratory illness, and not a group (cohort) of all patients eligible for vaccination. Ideally, you'd want to determine the likelihood that influenza vaccine prevents clinical illness, visits to the doctor, hospital admission, and mortality. The vaccine effectiveness in the MMWR report can't tell us that, and I'll explain why in #1, below. Additionally, the MMWR report determines vaccine effectiveness using a case-control method and not a cohort method. This might seem like an esoteric point, but it could have a big influence on how effective we think a vaccine is. I'll try to explain this in #2 and #3, below. Finally, I'm not sure how they decided which patients should be included in the uninfected group for their calculations. I'll explain this a bit more in #4 and show how it could bias the estimates of vaccine effectiveness.
1) Why does CDC utilize outpatient, sick controls in their estimates of vaccine efficacy? I suspect this is an issue of expediency and cost-effectiveness. It would be more expensive to enroll 1000 patients in September and track them weekly to see whether they get the vaccine, whether they develop symptoms, and then test them. Of course, they can't easily do randomized studies in the US or elsewhere, since the vaccine is recommended for just about everybody, so randomizing patients to no vaccine would be unethical. Whatever the reason, selecting an entire cohort of patients already sick enough to visit their doctor does not tell us how effective the vaccine is in preventing illness, visits to the doctor, hospitalization, or death. The MMWR report can only tell us how effective the vaccine is in preventing an influenza infection vs. another infection, conditional on already being sick enough to go to the doctor's office. What does that mean for people trying to decide if they should get vaccinated?
Also, could it be that selecting this cohort biases the findings in other ways? What if vaccinated patients were more likely to seek medical care for their symptoms? What if those who develop acute respiratory illness are different from, or sicker than, healthy controls in some systematic way? These could impact the measure of vaccine effectiveness. Additionally, using outpatient, sick controls leaves out two very important groups: hospitalized patients and healthy people who never developed an illness in the first place. I suspect that declining funding for CDC and other groups is behind this - you get what you pay for. However, none of the reports I've read explain this limitation when reporting vaccine effectiveness. They should.
Note: I've added a second post describing a bit more why CDC selected this cohort of patients.
2) Why does the CDC measure vaccine effectiveness using odds ratios even when they have a cohort of patients? To explain further, a case-control study would be one where they find 1000 (or any number of) influenza-positive patients, look back to see if they were vaccinated, and then find another set of 1000 influenza-negative controls (healthy, sick, whatever) and see if they were vaccinated. Here, they identified a cohort of patients with acute respiratory illness first and then determined their influenza status and vaccine status retrospectively. Thus, this is a "retrospective" cohort study. The fact that the cohort was established conditional on an outpatient visit for a respiratory complaint does not make it any less a cohort. This matters because they report odds ratios and not relative risks, and as a reminder, when the baseline or initial risk is high, the odds ratio can overestimate the relative risk. To find out how and why this is important, read this BMJ article.
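To make that concrete, here's a minimal sketch in Python, with made-up illustrative counts (not numbers from the MMWR report), showing that the OR tracks the RR when the outcome is rare but drifts away from it when the outcome is common, as it is in a cohort where everyone is already sick:

```python
# 2x2 layout: a,b = exposed with/without the outcome;
#             c,d = unexposed with/without the outcome.
def odds_ratio(a, b, c, d):
    return (a / b) / (c / d)

def relative_risk(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

# Rare outcome (~2-4% risk): OR and RR nearly agree.
print(odds_ratio(10, 490, 20, 480))     # ~0.49
print(relative_risk(10, 490, 20, 480))  # 0.50

# Common outcome (40-80% risk): the OR badly overstates the effect.
print(odds_ratio(200, 300, 400, 100))     # ~0.17
print(relative_risk(200, 300, 400, 100))  # 0.50
```

In both scenarios the true relative risk is 0.50, but with a common outcome the odds ratio of 0.17 would suggest a far more protective effect than actually exists.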
3) Did measuring vaccine effectiveness using an odds ratio (OR) method (as it appears the CDC did) vs. the relative-risk (RR) method normally used in cohort studies matter? The question here is not theoretical, as above; rather, I'm asking whether, if we took the exact numbers in the MMWR report but applied a cohort (relative-risk) method, we would get a different estimate of effectiveness. Short answer: yes.
If we take the table showing attack rates in vaccinated vs unvaccinated for influenza A only (from Table 2 here), we get very different results based on the method used to calculate efficacy.
Using the CDC or OR method, vaccine effectiveness (VE) = (1-OR)*100, where OR = ad/(bc), with a = vaccinated & influenza positive, b = vaccinated & influenza negative, c = unvaccinated & influenza positive, and d = unvaccinated & influenza negative. Using that method, the calculated VE=53.4% (CDC reports 55% in their table, since they adjusted for site).
Using the RR (cohort) method, the VE = (1-RR)*100, where RR = (a/(a+b)) / (c/(c+d)). Using that method, the calculated VE=44.1%
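For anyone who wants to reproduce this, here's a minimal sketch of the two calculations in Python. Plug in the influenza A cell counts from Table 2 of the MMWR report (not reproduced here) and the two functions should return roughly 53.4 and 44.1:

```python
# 2x2 layout as defined above: a = vaccinated & flu positive,
# b = vaccinated & flu negative, c = unvaccinated & flu positive,
# d = unvaccinated & flu negative.

def ve_odds_ratio(a, b, c, d):
    """Case-control (CDC) style: VE = (1 - OR) * 100, OR = ad/(bc)."""
    return (1 - (a * d) / (b * c)) * 100

def ve_relative_risk(a, b, c, d):
    """Cohort style: VE = (1 - RR) * 100, RR = (a/(a+b)) / (c/(c+d))."""
    return (1 - (a / (a + b)) / (c / (c + d))) * 100
```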
This is a very big difference, with a 9.3% absolute reduction in effectiveness from the method alone! It seems that since site-level variation is not a big driver of the effectiveness, the RR approach might be more accurate. Of note, when you do the above analysis for the vaccine vs. influenza A or B, the VE falls from 62% using the OR approach to 47% using the RR approach.
*I hope someone can explain why they are analyzing cohort data using case-control methods. For more information on how I calculated these estimates, see this paper by Walter Orenstein et al. from 1985. It appears this case-control method is standard in the influenza vaccine literature.
4) Why did CDC leave the influenza B positive patients out of their calculation of the effectiveness of the vaccine versus influenza A, and vice-versa? When I looked at Table 2 above, one thing struck me as odd: in all three effectiveness calculations, they used the same control group. To me, if you don't have influenza A, you should be included in the "uninfected group" for testing the effectiveness of the vaccine against influenza A. To see if this matters, I added in the 180 patients who were influenza B positive AND influenza A negative that the CDC left out of their calculation. Here is the new 2x2 table:
Here, if I use the CDC (case-control or odds-ratio) method, I find a VE = 41%, and if I use the cohort method, I find a VE = 34.4%. These results are so different from those reported in MMWR that I'd be very interested to know why they chose to leave the influenza B patients out.
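Mechanically, this recalculation just means folding the 180 influenza B positive / influenza A negative patients into the influenza A negative cells before recomputing. A sketch, reusing the functions above; the counts below are made-up placeholders, since I haven't reproduced the actual Table 2 cells or the vaccination split among the 180 here:

```python
# Made-up illustrative counts -- NOT the MMWR numbers.
a, c = 100, 150            # flu A positive: vaccinated, unvaccinated
b, d = 400, 350            # flu A negative (CDC's control group): vaccinated, unvaccinated
b_fluB, d_fluB = 80, 100   # the 180 flu B+/flu A- patients, by vaccination status

# CDC-style control group vs. control group with the flu B cases included:
print(ve_odds_ratio(a, b, c, d))
print(ve_odds_ratio(a, b + b_fluB, c, d + d_fluB))
print(ve_relative_risk(a, b + b_fluB, c, d + d_fluB))
```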
OK. For influenza A, the vaccine effectiveness was reported as 55% in the MMWR report. Depending on how I calculated the vaccine effectiveness, I found that it ranged from 53.4% to 34.4%, with the more accurate estimate likely closer to 34%. A pretty huge range, don't you think? Perhaps these reports should calculate effectiveness in a number of different ways and provide them in a sensitivity analysis. Better yet, we should fund prospective cohort studies that include healthy patients and measure the true effectiveness of the vaccine. Even better, a universal influenza vaccine would render this all moot, but that's in the future...
ADDENDUM:
Please see the other two posts in this thread: (1) My discussion of the test-negative design and (2) the MMWR author's explanation of why they study influenza vaccine effectiveness the way they do.