Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science
Highlights and notes by Professor Chris Lloyd
4%
if it won’t replicate, then it’s hard to describe what you’ve done as scientific at all.
Note: Not for the grievance-studies hoax, though: gender and cultural studies research does not involve experiments, only theoretical frameworks.
4%
Science is inherently a social thing, where you have to convince other people – other scientists – of what you’ve found.
Note: It really shouldn't be.
4%
encouraging researchers to obsess about prestige, fame, funding and reputation
Note: Lucky for me, I never cared.
5%
How many times have politicians made laws or policies that directly impact people’s lives, citing science that won’t stand up to scrutiny?
Note: The Conversion Prohibition Bill!
5%
Before that statement makes you toss the book across the room, let me explain what I mean.
Note: Indeed, I almost did.
8%
‘The experimental evidence for the ideas I presented in that chapter was significantly weaker than I believed when I wrote it,’ he commented six years after the publication of Thinking, Fast and Slow. ‘This was simply an error: I knew all I needed to know to moderate my enthusiasm … but I did not think it through.’14 But the damage had already been done: millions of people had been informed by a Nobel Laureate that they had ‘no choice’ but to believe in those studies.
Note: Priming studies all failed to replicate.
8%
2015, made bitter reading: in the end, only 39 per cent of the studies were judged to have replicated successfully.
Note: Remember this for debates.
9%
Almost all of the replications, even where successful, found that the original studies had exaggerated the size of their effects. Overall, the replication crisis seems, with a snap of its fingers, to have wiped about half of all psychology research off the map.29
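It's worth seeing why selection on significance guarantees this exaggeration. Below is a minimal Python sketch (all numbers are assumptions for illustration, not taken from the book): with a modest true effect and small samples, only flukishly large estimates clear p < 0.05, so the "significant" subset overstates the truth.

```python
# Sketch of the winner's curse: filter studies by p < 0.05 and the survivors
# exaggerate. True effect and sample size below are assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, sims = 0.2, 20, 10_000     # modest effect, small study, many tries

all_d, sig_d = [], []
for _ in range(sims):
    a = rng.normal(0.0, 1.0, n)                   # control group
    b = rng.normal(true_d, 1.0, n)                # treatment group
    _, p = stats.ttest_ind(b, a)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    all_d.append(d)
    if p < 0.05:
        sig_d.append(d)

print(f"true effect:               {true_d:.2f}")
print(f"mean across all studies:   {np.mean(all_d):.2f}")   # unbiased, ~0.20
print(f"mean across p<0.05 only:   {np.mean(sig_d):.2f}")   # inflated, ~0.6
```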
9%
In macroeconomics (research on, for example, tax policies and how they affect countries’ economic growth), a re-analysis of sixty-seven studies could only reproduce the results from twenty-two of them using the same datasets, and the level of success improved only modestly after the researchers appealed to the original authors for help.41
13%
the system is largely built on trust: everyone basically assumes ethical behaviour on the part of everyone else.
16%
India and China were overrepresented in the number of papers with duplicated images, while the US, the UK, Germany, Japan and Australia were underrepresented. The authors proposed that these differences were cultural: the looser rules and softer punishments for scientific misconduct in countries like India and China might be responsible for their producing a higher quantity of potentially fraudulent research.
16%
one hundred per cent of trials of acupuncture from scientists in China had positive results
16%
‘Injecting falsehoods into the body of science is rarely, if ever, the purpose of those who perpetrate fraud,’ he suggests. ‘They almost always believe that they are injecting a truth into the scientific record … but without going through all the trouble that the real scientific method demands.’
17%
scientific fraudsters do grievous and disproportionate damage to science and thereby to one of our most precious human institutions.
Note: Like SJW fraudsters in the grievance studies departments.
17%
in general, scientists are open-minded and trusting. The norm for peer reviewers is to be sceptical of how results are interpreted, but the thought that the data are fake usually couldn’t be further from their minds.
17%
An analysis of several other retracted papers found that 83 per cent of post-retraction citations were positive and didn’t mention the retraction – these zombie papers were still shambling around the scientific literature, with hardly anyone noticing they were dead.118
18%
Not only do the most glamorous journals encourage people to send only the flashiest findings – more or less guaranteeing that some small fraction of scientists will turn to deception to achieve such flashiness – but journal editors often act with reluctance and recalcitrance when even quite solid evidence of wrongdoing comes to light.
24%
As the data scientists Tal Yarkoni and Jake Westfall explain, ‘The more flexible a[n] … investigator is willing to be – that is, the wider the range of patterns they are willing to ‘see’ in the data – the greater the risk of hallucinating a pattern that is not there at all.’71
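A small simulation makes the Yarkoni and Westfall point concrete (the ten candidate predictors and other quantities here are assumptions for illustration): every variable below is pure noise, yet a flexible-enough analyst will keep "finding" significant patterns.

```python
# Pure-noise data, flexible analysis: try k candidate predictors and keep
# whichever 'works'. All quantities are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sims, n, k = 2_000, 100, 10           # k = number of patterns entertained

hits = 0
for _ in range(sims):
    outcome = rng.normal(size=n)      # no real signal anywhere
    pvals = []
    for _ in range(k):                # a wider and wider range of 'patterns'
        predictor = rng.normal(size=n)
        _, p = stats.pearsonr(outcome, predictor)
        pvals.append(p)
    if min(pvals) < 0.05:
        hits += 1

print(f"datasets yielding >=1 'significant' pattern: {hits / sims:.0%}")
# roughly 1 - 0.95**10, i.e. about 40%, despite there being nothing to find
```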
24%
‘In a head-to-head competition between papers … the paper with results that are all significant and consistent will be preferred over the equally well-conducted paper that reports the outcome, warts and all, to reach a more qualified conclusion.’
24%
In most cases, the students probably thought that by nixing such results they were letting their data more clearly ‘tell a story’ – and they were likely being taught by senior researchers that this was the right thing to do to persuade peer reviewers that a study should be published.76 In reality, they were leaving future scientists with a hopelessly biased picture of what went on in the research.
24%
From 2005, the International Committee of Medical Journal Editors, recognising the massive problem of publication bias, ruled that all human clinical trials should be publicly registered before they take place – otherwise they wouldn’t be allowed to be published in most top medical journals.
Note: But they still published papers that changed endpoints; see the next highlight.
25%
Of sixty-seven trials, only nine reported everything they said they would. Across all the papers, there were 354 outcomes that simply disappeared between registration and publication (it’s safe to assume that most of these had p-values over 0.05), while 357 unregistered outcomes appeared in the journal papers ex nihilo.
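The arithmetic behind that outcome-swapping is simple. As a back-of-envelope sketch (independence of outcomes assumed for simplicity): measure enough true-null outcomes and a "significant" one turns up by luck alone.

```python
# If a trial measures k outcomes with no real effects, the chance that at
# least one crosses p < 0.05 by chance is 1 - 0.95**k (independence assumed).
for k in (1, 5, 10, 20):
    print(f"{k:2d} outcomes -> {1 - 0.95**k:.0%} chance of a spurious hit")
# 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%: report only the 'hits' and
# significance is close to guaranteed.
```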
25%
Yet other distorting forces also exert their pull. The first one that comes to mind is money. In the US, where numbers are easily available, just over a third of registered medical trials in recent years were funded by the pharmaceutical industry.
25%
null results matter too. To know that a treatment doesn’t work, or that a disease isn’t related to some bio-marker, is useful information:
25%
The dissenting researchers described how proponents of the amyloid hypothesis, many of whom are powerful, well-established professors, act as a ‘cabal’, shooting down papers that question the hypothesis with nasty peer reviews and torpedoing the attempts of heterodox researchers to get funding and tenure.
26%
My own field, psychology, is no stranger to scientists who identify as left-wing. The skew this way in psychology is very large indeed: surveys in the United States have found the ratio of liberals to conservatives to be around 10:1.
26%
Critics of the liberal bias in psychology have turned their fire, for instance, on the idea of stereotype threat.100 It’s the idea that girls’ mathematics test performance suffers when they’re reminded of the stereotype that ‘boys are better at maths’.
26%
The evidence for the phenomenon is quite weak, and possibly subject to publication bias, for a 2015 meta-analysis reviewing all the relevant stereotype threat studies found a clear gap where the small, null studies on the subject – those that showed girls were equally good at maths with and without the stereotypes being mentioned – should have been.
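That "gap" follows mechanically from publication bias, as a quick simulation shows (sizes and thresholds are illustrative assumptions): when the true effect is zero, a small study can only reach significance with a hugely exaggerated estimate, so the published record's small studies all show big effects and the small null studies vanish from view.

```python
# Simulate a literature with a true effect of zero where only p < 0.05
# results get published; all numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pub_small, pub_large = [], []
for _ in range(20_000):
    n = int(rng.integers(10, 200))    # study sizes vary
    a = rng.normal(size=n)
    b = rng.normal(size=n)            # true effect is exactly zero
    _, p = stats.ttest_ind(b, a)
    if p < 0.05:                      # the file drawer swallows the rest
        target = pub_small if n < 50 else pub_large
        target.append(abs(b.mean() - a.mean()))

print(f"published small studies: mean |effect| = {np.mean(pub_small):.2f}")
print(f"published large studies: mean |effect| = {np.mean(pub_large):.2f}")
# The meta-analyst sees big effects from small studies and modest effects from
# large ones, with a hole where the small null studies should have been.
```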
26%
The Mismeasure of Man, Gould had freely admitted to having a strong commitment to social justice and liberal politics.111 The Lewis paper concluded that ‘ironically, Gould’s own analysis of Morton is likely the stronger example of a bias influencing results’.
29%
This isn’t just a case of an unfortunate error, but a decades-long denial of a serious failing.
31%
Their propensity to mislead means that low-powered studies actively subtract from our knowledge: it would often have been better never to have done them in the first place. Scientists who knowingly run low-powered research, and the reviewers and editors who wave through tiny studies for publication, are introducing a subtle poison into the scientific literature, weakening the evidence that it needs to progress.
Note: This is only true in combination with publication bias. If all studies were published, each would add to the total meta-analysis evidence base.
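For intuition on what "low-powered" means here, a short sketch using statsmodels (the effect size d = 0.3 is an assumed, fairly typical psychology-sized effect): small samples usually miss it, which is also why the unpublished misses matter for the meta-analytic point in the note above.

```python
# Power of a two-sample t-test for an assumed true effect of d = 0.3.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for n in (20, 50, 200):
    pw = power.power(effect_size=0.3, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group -> {pw:.0%} power to detect d = 0.3")
# n =  20 -> ~15%, n =  50 -> ~32%, n = 200 -> ~85%

# Sample size needed per group for the conventional 80% power:
n80 = power.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(f"~{n80:.0f} per group needed for 80% power")   # about 175
```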
34%
that are of ‘unusual significance’ (Cell) or ‘exceptional importance’ (Proceedings of the National Academy of Sciences).59 Conspicuous by their absence from this list are any words about rigour or replicability – though hats off to the New England Journal of Medicine, the world’s top medical journal, for stating that it’s looking for ‘scientific accuracy, novelty, and importance’, in that order.
38%
people have a natural tendency to compete intensely for status and credit, to collect reputation-burnishing achievements, and to work towards even objectively meaningless targets
38%
for the more ambitious and competitive amongst us, a long CV can become its own reward. For some, simply getting one’s name on a scientific paper, any scientific paper, feels like a meaningful accomplishment.
39%
Pushing peer reviewers (who are, of course, busy scientists themselves) to review more and more submissions means that more research that’s mistaken, overhyped or even fraudulent will get past the filter. It can’t be surprising, in both cases, if standards slip.
39%
Neither salami-slicing nor publishing in predatory journals is strictly against any rules and it’s as hard to define what counts as salami-slicing as it is to categorise all journals into ‘predatory’ and legitimate.
40%
The trifecta of salami-slicing, dodgy journals and peer-review fraud
41%
When we look at the overall trends in scientific practice in recent decades – the exponential proliferation of papers; the strong academic selection on publications, citations, h-indices and grants; the obsession with impact factors and with new, exciting results; and the appearance of phenomena like predatory journals, which are of course just catering to a demand – wouldn’t it be strange if we didn’t see such bad behaviour on the part of scientists?
41%
I compared the process of the lengthening of CVs required to get academic jobs to sexual selection, where increasingly flamboyant displays evolve to attract mates.
41%
the system selects against researchers who have strong convictions about getting it right, filling their places instead with those who are happier to bend the rules.
41%
perversely, false-positive results are just as publishable as true positives, but easier to come by.
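The "easier to come by" claim is just base-rate arithmetic, sketched below with illustrative numbers (the 10% prior and 50% power are assumptions, not figures from the book): a false positive needs no real effect behind it, only a run of luck, so nearly half of the positive results can be flukes.

```python
# Expected haul from testing 1,000 hypotheses when only some are true.
n_hyp      = 1_000
prior_true = 0.10    # assumed share of hypotheses that are actually true
power      = 0.50    # assumed typical study power
alpha      = 0.05    # a false positive needs no effect, just this much luck

true_pos  = n_hyp * prior_true * power          # 50 genuine discoveries
false_pos = n_hyp * (1 - prior_true) * alpha    # 45 flukes
share_false = false_pos / (true_pos + false_pos)
print(f"true positives:  {true_pos:.0f}")
print(f"false positives: {false_pos:.0f}")
print(f"share of positive results that are false: {share_false:.0%}")  # ~47%
```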
42%
in 2019 the Swedish government passed a law that stopped universities from investigating cases of research misconduct themselves, instead handing the responsibility to a new, independent government agency.
43%
the point of breaking ground is to begin to build something; if all you do is groundbreaking, you end up with a lot of holes in the ground but no buildings.
43%
the editorial board ‘acknowledges the significance of replication in building a cumulative knowledge base in our field. We therefore encourage submissions that attempt to replicate important findings, especially research previously published in the Journal of Personality and Social Psychology.’
43%
over 1,000 journals recently adopted a set of guidelines explicitly announcing, among other things, that replication studies are welcome.
45%
registered study, impromptu analyses to explore interesting patterns in the data are still allowed, they just can’t be spun to look as if they’d been planned in advance. These so-called exploratory analyses …
45%
Yet somewhat scandalously, the majority of science frames exploratory results as though they were confirmatory; as though they were the results of tests planned before the study started. Pre-registration lets you be clear with your readers whether you were using the data in an exploratory way, to generate hypotheses …