The Statistical Puzzle Over How Much Biomedical Research is Wrong | MIT Technology Review
This presents Jager & Leek's empirical model of statistical failure. However, both Ioannidis's work and the two replication efforts noted report a far higher failure rate. The replication efforts did the hard work of actually re-running the experiments (at Amgen and Bayer, firms that desperately want workable results for commercial advantage and therefore scrutinize replication closely and with interest), while Ioannidis points to poorly designed experiments. Arguing with the replication studies is foolish, and answering Ioannidis's critique of experimental design with "But it might have worked" is clueless.
UPDATE: A statistician comments on the paper: "I think what Jager and Leek are trying to do is hopeless. So it’s not a matter of them doing it wrong, I just don’t think it’s possible to analyze a collection of published p-values and, from that alone, infer anything interesting about the distribution of true effects. It’s just too assumption-driven. You’re basically trying to learn things from the shape of the distribution, and to get anywhere you have to make really strong, inherently implausible assumptions. These estimates just can’t be “empirical” in any real sense of the word. It’s fine to do some simulations and see what pops up, but I think it’s silly to claim that this has any direct bearing on claims of scientific truth or progress."
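To make the objection concrete, here is a minimal simulation sketch of the kind of mixture-of-p-values estimation at issue. It is broadly in the spirit of what Jager and Leek describe (published p-values modeled as a mixture of a truncated uniform for false positives and a truncated Beta for real effects), but it is not their code, and the Beta(0.5, 25) alternative, the 0.05 publication cutoff, and all the numbers are my illustrative assumptions:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate a literature: a fraction pi0 of all tested hypotheses are
# true nulls (p ~ Uniform(0, 1)); the rest are real effects, with
# p-values drawn here from a Beta(0.5, 25) -- an arbitrary stand-in.
pi0_all, n = 0.3, 5000
is_null = rng.uniform(size=n) < pi0_all
p_all = np.where(is_null, rng.uniform(0, 1, n), rng.beta(0.5, 25, n))

# Only "significant" results get published.
published = p_all <= 0.05
p = p_all[published]
true_fdr = is_null[published].mean()  # actual false-positive fraction

# Fit a two-component mixture to the published p-values:
#   pi0 * Uniform(0, 0.05) + (1 - pi0) * Beta(a, b) truncated to [0, 0.05]
def neg_log_lik(theta):
    pi0, a, b = theta
    f_null = 1.0 / 0.05                                       # truncated uniform density
    f_alt = stats.beta.pdf(p, a, b) / stats.beta.cdf(0.05, a, b)
    return -np.sum(np.log(pi0 * f_null + (1 - pi0) * f_alt))

res = optimize.minimize(
    neg_log_lik, x0=[0.5, 1.0, 20.0],
    bounds=[(1e-3, 1 - 1e-3), (0.05, 5.0), (1.0, 500.0)],
    method="L-BFGS-B",
)
pi0_hat = res.x[0]
print(f"true false-positive fraction among published results: {true_fdr:.2f}")
print(f"estimate read off the fitted p-value mixture:         {pi0_hat:.2f}")
# The estimate hinges entirely on the Beta family assumed for real
# effects; swap in a different alternative density and pi0_hat moves.
```

The sketch works only because the fitted model matches the simulation exactly; with real published p-values nobody knows the true shape of the alternative distribution, and the recovered false-positive fraction shifts with whatever family you assume, which is precisely the "assumption-driven" complaint above.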