Over the past few years, there have been calls to base bioethics more firmly on empirical analysis “to promote more balanced, and evidence-based, bioethics”. Ideally, this would do away with the need for woolly, unverifiable “normative discourse” – in other words, moral reasoning.
Say, for instance, we are debating infant euthanasia. We might gather evidence about survival rates for children born with serious birth defects to decide whether to continue treatment. We might do cost estimates of caring for these infants over their lifetime. We might study the impact upon the mental health of couples caring for such children. Even if these figures did not determine the outcome of a hospital’s decision, they would be influential.
However, an article in the journal Royal Society Open Science should make bioethicists think twice about how successful evidence-based morality could ever be.
Paul Smaldino, a cognitive scientist at the University of California, Merced, and a colleague argue that “In fields such as psychology, neuroscience and medicine, practices that increase false discoveries remain not only common, but normative”. In short, false results are not exceptional but to be expected.
From this shocking assertion the authors go on to argue that a kind of Darwinian process of natural selection leads not to truth, but to bad science.
some of the most powerful incentives in contemporary science actively encourage, reward and propagate poor research methods and abuse of statistical procedures. We term this process the natural selection of bad science to indicate that it requires no conscious strategizing nor cheating on the part of researchers. Instead, it arises from the positive selection of methods and habits that lead to publication.
This hypothesis backs up what other critics of research have been saying for decades: that confidence in the scientific method has to be tempered by an awareness of human fallibility. In 2005 John Ioannidis, a Greek academic working at Stanford, analysed the statistics of significant medical research papers and concluded that “most research findings are false for most research designs and for most fields”. This became the most downloaded paper in the history of the journal PLoS Medicine.
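Ioannidis’s arithmetic is easy to reproduce. A minimal sketch (the numbers below are illustrative assumptions, not figures from his paper) shows how a field in which most tested hypotheses are false, and studies are underpowered, ends up with a literature dominated by false positives even when everyone plays by the rules:

```python
# Why low-powered research yields mostly false "discoveries".
# All three numbers are illustrative assumptions, not data from Ioannidis.

base_rate = 0.10   # fraction of tested hypotheses that are actually true
power     = 0.20   # chance a study detects a true effect (low power)
alpha     = 0.05   # conventional false-positive threshold (p < 0.05)

true_positives  = base_rate * power          # real effects correctly found
false_positives = (1 - base_rate) * alpha    # noise that clears the threshold

# Positive predictive value: of all "significant" results, how many are real?
ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'significant' findings that are true: {ppv:.0%}")
# With these inputs, fewer than a third of published "discoveries" are real.
```

Under these assumptions roughly 31 percent of statistically significant findings would be true, and the figure falls further as power drops or as researchers test ever more speculative hypotheses in search of novelty.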
How does this happen?
One reason is that jobs in science are scarce and attractive. So scientists seeking positions or promotion tend to produce studies with low statistical power but flashy results that make them stand out. Smaldino writes:
In the years between 1974 and 2014, the frequency of the words ‘innovative’, ‘groundbreaking’ and ‘novel’ in PubMed abstracts increased by 2500% or more. As it is unlikely that individual scientists have really become 25 times more innovative in the past 40 years, one can only conclude that this language evolution reflects a response to increasing pressures for novelty, and more generally to stand out from the crowd.
The consequences of this noxious dynamic are poorly designed studies, irreproducible results, and false positives. As the authors sardonically remark, “Science does not possess an ‘invisible hand’ mechanism through which the naked self-interest of individuals necessarily brings about a collectively optimal result.”
Here’s a very recent example. China’s drug regulator has just released a damning report which stated that more than 80 percent of clinical trial data for new drug registration in that country is fraudulent or sub-standard. China’s State Food and Drug Administration (CFDA) says that the clinical trials industry is “chaotic”. The CFDA thundered that “the finding of clinical trial chaos and fraudulent behaviour is startling. We will deal severely with any instances of fraudulent and deceptive behaviour in relation to drug applications and unceasingly pursue the responsible officials.”
If this is the case in science and medicine, the situation could be far worse in the social sciences, where data are harder to obtain, studies tend to have lower statistical power, and results are more open to interpretation. In fact, there have been some incredible scandals in the field of social psychology in recent years, in which it emerged that respected academics had simply made up data.
Diederik Stapel, for instance, was a psychologist with a long list of publications and a stellar career. He had an eye for media-friendly research topics – meat eaters are more selfish than vegetarians, for example. But in 2011, whistleblowers alerted authorities at Tilburg University, in the Netherlands, about irregularities in his published papers. His reputation unravelled quickly. Stapel admitted that he had fiddled his data and fabricated research results and returned his PhD.
In a sombre report on the case, three committees found fundamental flaws in the scientific process both in the Netherlands and internationally.
“Virtually nothing of all the impossibilities, peculiarities and sloppiness mentioned in this report was observed by all these local, national and international members of the field, and no suspicion of fraud whatsoever arose… from the bottom to the top there was a general neglect of fundamental scientific standards and methodological requirements.”
Ioannidis insists that the astonishing level of false results in scientific papers is not a reason to distrust the scientific method as such. Science is supposed to work by trial and error. Over time a consensus on the truth emerges from the debris of broken theories. Still, when headlines proclaim ground-breaking social science research about controversial ethical issues such as same-sex parenting or the mental health of transgender adolescents, it’s probably salutary to recall Dr Smaldino’s observation about academic studies: “practices that increase false discoveries remain not only common, but normative”.
Michael Cook is editor of MercatorNet.