The elephant in the lab: how much science is fabricated?
We need to talk about scientific misconduct. Just last week, a widely respected medical researcher admitted to fabricating her results, causing two major heart studies to be retracted.
She’s not alone in this, and we have no idea of the scale of scientific fraud. It’s something that’s incredibly difficult to measure, as obviously nobody wants to admit it until they’re caught – and, even then, you’d imagine some reluctance when their entire career is on the line.
However, the repercussions are significant: fraud reduces trust in scientific research, and can mean that serious misconceptions become mainstream “fact”, even if the underlying papers are later retracted.
Take the 1998 study linking vaccines and autism: The Times uncovered conflicts of interest and fabricated results, and the paper was fully retracted by The Lancet in 2010. Yet its effects are still felt, and its claims are still spouted by presidential frontrunners.
Heard of green coffee pills? The manufacturers got a $9 million fine for the studies they used to promote their alleged health benefits, which still get repeated today. Hwang Woo-suk received a prison sentence for falsifying stem cell cloning papers and embezzling government research grants, but is now back working in animal cloning. Even the dry world of political science isn’t immune: just last year, Science had to retract a paper claiming that gay canvassers could shift voters’ opinions on marriage equality, after the results appeared to have been completely fabricated.
There are plenty of retractions if you’re willing to look, some high-profile and others relatively unheard of, but the gradual erosion of trust is undeniable – and, when science deniers pick and choose the arguments they believe, that’s seriously dangerous.
The editor of The Lancet made waves earlier this year by suggesting that up to 50% of scientific studies could have significant issues: “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”
Not all of this relates to fabrication, of course, but that’s clearly part of it. Indeed, a study from 2009 suggested that 14% of scientists knew someone who had falsified results. Why? The most obvious, yet depressing, reason is “why not?” The benefits of nudging results towards the outcome that brings greater prestige simply outweigh the cost of getting caught. The disincentives are limited and unreliable; as the author of a paper on fake research wrote, “Retraction is the strongest sanction that can be applied to published research”, but currently “[it] is a very blunt instrument used for offences both gravely serious and trivial.” The chance of being caught manipulating results is sufficiently slim that it ceases to be an effective deterrent.
“The chance of being caught manipulating results is sufficiently slim that it ceases to be an effective deterrent.”
Why aren’t these papers caught more often? Perhaps there’s a certain amount of naivety – if something is published in a serious journal like Science or Nature, then the assumption is that it must be legitimate. And yet both have had high-profile retractions – indeed, retractions are more common in major journals – so that assumption doesn’t necessarily hold.
More realistically, the sheer amount of research being pushed out means that not everything can be deconstructed and repeated – and, besides, who would volunteer for the task? In the dog-eat-dog world of academia, time spent testing other people’s research is time not spent advancing your own career.
To fully comprehend why this is, you need to understand the academic career ladder. As a scientist, the pressure is always there to publish papers. The phenomenon is so widely accepted that it has its own motto: “publish or perish”. Academic institutions want researchers who publish interesting, widely read and cited research. If the study you’ve spent years working on doesn’t provide the interesting results you want – no, need – to keep your career on track, then the incentive to cheat becomes overwhelming.
“Scientific fraud is something that’s incredibly difficult to measure.”
Common sense would indicate that this is the main driving force behind fabricated results, but Daniele Fanelli – a senior research scientist at Stanford who has dedicated much of his career to the academic study of scientific misconduct – was keen to highlight the evidence that contradicts this.
“Whilst, in the past, I found empirical support for this idea, more accurate studies of mine seem to lead to an almost opposite conclusion,” he told me, pointing specifically to recent collaborative work, which highlighted that countries with more pressure to publish had lower rates of paper retractions.
The widely held belief that men are more likely to be found guilty of misconduct was also debunked, leaving four interesting correlations: misconduct rates were higher in countries without research integrity policies, where performance is rewarded with cash incentives, and in cultures where mutual criticism is hampered. It also seems to be linked to age, with most misconduct caught in the early stages of a career (though, cynically, you could posit that older scientists are just better at not getting caught).
This would seem to indicate that the solution is more red tape, but there are concerns here too. “Researchers might perceive a risk of having ever more regulations and bureaucracy stifle their work, and perceive their work as being unfairly discredited due to an excess of sensationalism and over-dramatisation of these issues. I completely share these concerns and, in fact, am increasingly trying to emphasise the weaknesses, trade-offs, and potential biases that characterise this field as I would do with any other,” said Fanelli.
“Fanelli’s research suggests that the softer sciences – those with a weaker consensus – are more prone to bias.”
Common sense would also suggest that, the more scrutiny a body of work comes under, the more likely it is to be solid. Something like climate change research has plenty of critics, but it’s rarely accused of being fabricated (barring one major scandal). Indeed, Fanelli’s research suggests that the softer sciences – those with a weaker consensus – are more prone to bias.
Ultimately, Fanelli believes that the work he and other researchers are doing into scientific misconduct should actually make people trust science more, not less: “The commitment of scientists to correct their own and others’ mistakes and to follow reason, evidence and to use the best methods to produce such evidence, is the source of the great respect and trust that people give to science,” he explained.
And is his area of research a cause of resentment in other scientists? “Right from the beginning, I received many emails of encouragement and praise. I might have enemies that I don’t know about because they don’t write to me, of course, but my general impression is that most scientists know that there is something we should talk about and don’t.”
The spotlight is certainly welcome, but I can’t help returning to some of Fanelli’s early research to end on a pessimistic note. Would the two-thirds of scientists judged by their colleagues to be engaging in shady practices consider the behaviour misconduct, or would they find a way of justifying it? With everything we know about human biases and objectivity, can any human ever be the best judge of their own ethics?