Did Google Flu Trends Miss the Mark This Year?
Declan Butler at Nature.com has a nice review of how various influenza-like illness (ILI) surveillance systems performed during the 2012-2013 influenza season. There is some good news and some bad news. The good news is that the timing of the epidemic's start and peak appears pretty well-aligned across the three systems. The bad news is that the "Google Flu Trends" peak was much higher than the CDC and "Flu Near You" peaks. Proposed reasons for the divergence include the timing of non-influenza ILI in the community, a bad norovirus season, and heavy media coverage, including the CDC's alert about a bad, early season in early December 2012.
I think these reasons, and others given in the article, have a lot of face validity and certainly match what many of us were feeling as the epidemic progressed. One thing I haven't seen in the coverage is whether the "peak" count of ILI is all that important. I'm more interested in the timing of the epidemic's start and peak than in an absolute count. I suspect the absolute number of infections helps with planning in future years, but timing matters more in the current year. I hope someone will comment, or covertly tell me, why the peak amount matters so much. It's not like ILI is the greatest case definition anyway.
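To make the timing-versus-magnitude distinction concrete, here is a minimal sketch using made-up weekly numbers, not any real surveillance data. Everything in it (the series, the inflation factor, the onset rule) is hypothetical:

```python
import numpy as np

# Hypothetical weekly ILI signals (invented numbers, not real data).
# "gft" mimics a system that inflates magnitude but has the same shape.
cdc = np.array([0.8, 0.9, 1.0, 1.2, 1.6, 2.2, 3.0, 4.1, 5.0, 5.6,
                6.0, 5.7, 5.1, 4.2, 3.3, 2.5, 1.9, 1.4, 1.1, 0.9])
gft = cdc * 1.9  # purely multiplicative overshoot

def onset_week(series, frac=0.3):
    """First week the signal reaches a fraction of its own peak.

    Defining onset relative to each series' own peak makes the
    timing estimate insensitive to a multiplicative bias.
    """
    above = np.flatnonzero(series >= frac * series.max())
    return int(above[0])

for name, series in [("CDC ILINet", cdc), ("Google Flu Trends", gft)]:
    print(f"{name}: onset week {onset_week(series)}, "
          f"peak week {int(np.argmax(series))}, "
          f"peak height {series.max():.1f}")
```

Because the onset rule is relative to each series' own peak, the multiplicative bias leaves both the onset week and the peak week unchanged; only the peak height diverges. If timing is what drives decisions in the current season, that divergence may matter less than the headlines suggest.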
Another assumption I haven't seen discussed is whether Google Flu Trends actually got it wrong, or whether it was the other methods that missed the mark. With an imperfect case definition, it's hard to call any of these three methods a gold standard, and any attempt to pick a winner seems a bit arbitrary. If the intent is to match the CDC so rates can be compared from year to year, that's one thing. But what if we are burdening ourselves with a suboptimal gold standard? A lot of sunk cost goes into any legacy system, but it's at least worth wondering occasionally whether the legacy is worth continuing.
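One way to sidestep the gold-standard question, at least for the shape of the curve, is to compare the systems pairwise. A small sketch, again with invented numbers standing in for the three systems: Pearson correlation is invariant to linear rescaling, so it scores how well the curves track each other without crowning any one of them the truth.

```python
import numpy as np

# Hypothetical weekly estimates from three systems (invented numbers).
rng = np.random.default_rng(0)
cdc = np.array([1.0, 1.4, 2.1, 3.2, 4.8, 6.0, 5.4, 4.0, 2.8, 1.8])
gft = cdc * 2.0 + rng.normal(0, 0.3, cdc.size)  # inflated but same shape
fny = cdc * 0.9 + rng.normal(0, 0.2, cdc.size)  # slightly deflated

systems = {"CDC ILINet": cdc, "Google Flu Trends": gft, "Flu Near You": fny}
names = list(systems)

# Pairwise correlation of shapes: scale drops out of corrcoef, so no
# series has to be anointed the gold standard.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = np.corrcoef(systems[names[i]], systems[names[j]])[0, 1]
        print(f"{names[i]} vs {names[j]}: r = {r:.2f}")
```

High pairwise agreement with a magnitude offset would point to a calibration problem rather than a broken signal, which is a different conversation than declaring one system the loser.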
Image source: http://www.nature.com/news/when-google-got-flu-wrong-1.12413