A recent high-profile story in the NYT by Carl Zimmer discussed a surge in article retractions since 2000; from 2000 to 2009, retractions rose by 3300% (see calculation below). The Times article draws much of its data and discussion from a pair of editorials by Ferric Fang and Arturo Casadevall covering "structural reforms" and "methodological and cultural reforms," and from a recent news feature in Nature. I highly recommend the two calls for reform from Fang and Casadevall, which cover topics ranging from increased administrative burden to problems with grant review. However, it is hard to draw a link between any of these problems and scientific error or misconduct without further research.
For example, much of the discussion falls into the category of "survival of the fittest," with scientists doing everything possible to publish in high-profile journals (e.g., NEJM), which is then supposed to lead to increased grant funding and wealth. I'm not denying that this could be a factor in scientific fraud, but it's hard to imagine it would directly increase errors. If you look at the NYT graphic (above), fraud accounts for a minority of all retractions. To provide evidence that scientists' desire for fame is driving the retraction epidemic, Fang and Casadevall published another editorial that included an analysis linking a journal's impact factor to a retraction index. They found that the higher the impact factor, the higher the retraction rate, with NEJM at the top.
Reasons offered for the association include higher risk-taking by authors in papers submitted to high-ranking journals, and greater scrutiny of publications in high-impact journals (e.g., via social media). These explanations are appealing, although I'm not sure there's a testable hypothesis among them. One testable "systemic aspect of the scientific publication process" that is likely to be associated with both journal rank and retraction (and more likely to be causal) is a manuscript's provenance.
In the art world, a painting's provenance refers to the chronology of ownership of a specific work. When I speak of a manuscript's provenance, I mean where it has been submitted and received peer review prior to publication. As anyone who has submitted a manuscript knows, you almost always submit the paper to a higher-impact journal first and then, if it is not reviewed or accepted, aim a bit lower. In fact, of the more than 100 papers we've submitted, only one initially went to a lower-ranked journal and then, when rejected, was submitted to and accepted by a higher-ranked journal.
Thus, manuscripts submitted and accepted at higher-ranked journals are more likely to have received only a single round of peer review. Yes, I know that some papers go to JAMA and then NEJM, and some papers are submitted directly to a specialty journal and reviewed once, but in general, papers published in higher-ranked journals have been through fewer rounds of peer review. Unfortunately, these data are not easily available, but this hypothesis can likely be tested. Regardless, I think the quantity of peer review is an important predictor of the quality of the final product. More peer review is unlikely to detect outright fraud, but most retractions aren't fraud-related.
So before we blame the entire system for a few bad apples, we need more epidemiological studies of why retractions are occurring. My first suggestion is that we consider manuscript provenance as a factor; my second is that journals pay for peer review, so that it becomes a valued exercise. Peer review service should also be considered for promotion. Sure, the system could be improved in many ways, but it's a stretch to go from there to a causal pathway between a journal's impact factor and a higher retraction index. Oh, and this is all probably because of Twitter anyway.
***Retraction Rate Calculation: Using rates of 3/year in 2000 to 180/year in 2009 from the NYT article, and given that there has been a 64% increase in PubMed articles (529,000 in 2000 to 866,000 in 2010), this represents an increase from 0.0006% to 0.02% of articles, or a 3300% increase.
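The footnote's arithmetic can be reproduced in a few lines. This is just a sketch using the counts quoted above (variable names are mine); note that the 3300% figure corresponds to the roughly 33-fold change between the two rounded rates (0.02% / 0.0006%).

```python
# Reproducing the retraction-rate arithmetic from the footnote.
# Counts are the figures quoted in the post.

retractions_2000, retractions_2009 = 3, 180       # retractions per year (NYT)
articles_2000, articles_2010 = 529_000, 866_000   # PubMed articles per year

# Retractions as a percentage of articles published.
rate_2000 = retractions_2000 / articles_2000 * 100   # ~0.0006%
rate_2009 = retractions_2009 / articles_2010 * 100   # ~0.02%

# Growth in the PubMed denominator over the same period.
pubmed_growth = (articles_2010 / articles_2000 - 1) * 100   # ~64%

# Fold change between the rounded rates, the basis of the "3300%" figure.
fold_change = round(rate_2009, 2) / round(rate_2000, 4)     # ~33-fold

print(f"{rate_2000:.4f}% -> {rate_2009:.2f}% (~{fold_change:.0f}-fold)")
```

Using the unrounded rates instead gives a somewhat larger fold change (~37x), so the headline percentage is sensitive to rounding; either way, the qualitative point (retractions have grown far faster than the literature itself) stands.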