Thoughts Online Magazine
Collected Articles on Culture & Politics
24th-Jan-2013 09:34 am
Inspiration
The Statistical Puzzle Over How Much Biomedical Research is Wrong | MIT Technology Review

This presents Jager & Leek's empirical model of statistical failure. However, both the work by Ioannidis and the two articles noted show a far higher failure rate. The two articles do the hard work of actually trying to replicate the claims (the Amgen and Bayer teams, both of which desperately want workable results for commercial advantage and therefore examine replication closely and with interest), while Ioannidis points out poorly designed experiments. Arguing with the replication studies is foolish, and arguing with Ioannidis about experimental design by saying "But it might have worked" is clueless.

UPDATE: A statistician comments on the paper: "I think what Jager and Leek are trying to do is hopeless. So it’s not a matter of them doing it wrong, I just don’t think it’s possible to analyze a collection of published p-values and, from that alone, infer anything interesting about the distribution of true effects. It’s just too assumption-driven. You’re basically trying to learn things from the shape of the distribution, and to get anywhere you have to make really strong, inherently implausible assumptions. These estimates just can’t be “empirical” in any real sense of the word. It’s fine to do some simulations and see what pops up, but I think it’s silly to claim that this has any direct bearing on claims of scientific truth or progress."
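
To make the objection concrete, here is a toy simulation (my own sketch; every parameter is hypothetical and nothing here comes from Jager & Leek's paper). It generates a "published literature" of significant results under two different assumed worlds and shows how both the shape of the published p-values and the true false-discovery proportion move with unobservable quantities, the share of real effects and the studies' power, which is exactly why reading the truth rate off a p-value histogram requires strong assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_literature(n_studies, frac_true, effect, n_per_arm, alpha=0.05):
    """Simulate two-arm studies and keep only the 'published' significant ones.
    Returns the published p-values and the true false-discovery proportion."""
    pvals, from_null = [], []
    for _ in range(n_studies):
        is_true = rng.random() < frac_true          # does a real effect exist?
        delta = effect if is_true else 0.0
        a = rng.normal(0.0, 1.0, n_per_arm)
        b = rng.normal(delta, 1.0, n_per_arm)
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:                               # only significant results get published
            pvals.append(p)
            from_null.append(not is_true)
    return np.array(pvals), np.mean(from_null)

# Two hypothetical "worlds" with different shares of real effects and power:
for label, frac, eff, n in [("few real, well powered", 0.15, 0.9, 25),
                            ("many real, underpowered", 0.60, 0.25, 25)]:
    p, fdr = simulate_literature(10_000, frac, eff, n)
    print(f"{label}: median published p = {np.median(p):.4f}, "
          f"true false-discovery proportion = {fdr:.2f}")
```

Playing with frac_true, effect, and n_per_arm makes the identifiability problem visible: different combinations shift the histogram and the underlying error rate in ways that can offset each other, and nothing in the published p-values alone pins them down.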
Comments 
24th-Jan-2013 06:31 pm (UTC)
Another interesting contributor to the mix is the concept of evidence-based medicine. Yes, you want what you're doing to be backed up with some sort of scientific or technical justification (otherwise you might as well be practising witchcraft), and what better justification than a body of positive results arising from your chosen treatment. But this puts doctors on the horns of a dilemma: if you swallow the doctrine rigidly, you can't do anything new without building a clinical trial around it, and sooner or later (preferably sooner) your colleagues are going to want to see results, if for no other reason than to be able to point to paper X in support of what they're doing.

Add to that the almost obscene fetish for the randomised double-blind placebo-controlled trial as not only the gold standard but the minimal standard that must be met before medical ethicists will approve anything these days, and you can understand why medical researchers might be tempted to rush into print before they have a long enough follow-up time or a large (and statistically powerful) enough sample.
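
For a sense of what "statistically powerful enough" actually costs, here is a standard back-of-the-envelope sample-size calculation for a two-arm comparison of means (a generic normal-approximation formula; the effect sizes and the 80% power target are illustrative, not taken from the comment above):

```python
from scipy import stats

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a standardized mean
    difference `effect_size` at two-sided level alpha with the given power:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for d in (0.8, 0.5, 0.2):  # Cohen's conventional large / medium / small effects
    print(f"effect size d = {d}: roughly {n_per_arm(d):.0f} patients per arm")
```

Small effects are the expensive ones: dropping d from 0.8 to 0.2 multiplies the required enrollment by sixteen, which is much of the temptation to publish early.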

A lot of this is the result of (a) the risk-averse and blame-seeking nature of society today (and things are only going to get worse before they get better), coupled with (b) the widespread failure of both doctors and patients to comprehend that, for all the scientific advances over the years, much of medicine is still an art, and riding the cutting edge all too often involves doing so blindfolded and with no certainty as to the end result.

Finally, we have the deliberate frauds skewing things. I recall reading something recently about a previously revered professor who has since had two hundred of his articles (or articles co-authored by him) pulled, because it came out that the data for all of them were either grossly modified to fit expectations or cut from whole cloth.

I don't trust any study unless the P value is less than 0.01, and the more zeroes there are after the decimal point, the better I feel about the validity of the conclusions. But in the end, it comes down not just to the numbers but to the meat of the study, something you don't get from just reading the abstract and nodding at the P values they report. When you read between the lines, if the way they describe doing things shows that they didn't have a clue or they weren't at least aware of potential sources of confirmation bias, it doesn't matter if the P value has a million zeroes between the decimal point and the 1; that study isn't worth a pinch of shit.
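
That last point is easy to demonstrate. In the toy observational study below (entirely hypothetical), the treatment does nothing at all, but a confounder steers sicker patients away from it, and the naive comparison returns an astronomically small p-value anyway:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 5000

# Hypothetical confounded study: sickness drives both treatment assignment
# and outcome, while the treatment itself has zero effect.
severity = rng.normal(0.0, 1.0, n)
treated = rng.random(n) < 1 / (1 + np.exp(severity))  # healthier -> more likely treated
outcome = -severity + rng.normal(0.0, 1.0, n)         # depends only on severity

t, p = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"naive comparison: t = {t:.1f}, p = {p:.2e}")
# The tiny p-value measures the strength of the confounding, not any
# treatment effect -- exactly the kind of study the zeroes can't save.
```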
24th-Jan-2013 06:53 pm (UTC)
One of the more interesting books I got handed was Just Culture: Balancing Safety and Accountability, which discussed the need to balance experience in the field with liability in a way that recognizes experience without demanding the impossible. The background of the book was engineering (where, despite all the calculations you make, it's amazing what can come back and get you) rather than medicine, but I thought the author had a good approach to understanding, making allowances, and moving forward. Sort of like the Dennis Prager principle: sue on intent, not negligence. If they meant to do it wrong, that is worth fighting over. If it was an accident, let it go.