Kindle Notes & Highlights
Read between February 24 – March 13, 2025
Now that is scientific fact. There’s no real evidence for it, but it is scientific fact.
Brass Eye
For a scientific finding to be worth taking seriously, it can’t be something that occurred because of random chance, or a glitch in the equipment, or because the scientist was cheating or dissembling. It has to have really happened. And if it did, then in principle I should be able to go out and find broadly the same results as yours. In many ways, that’s the essence of science, and something that sets it apart from other ways of knowing about the world: if it won’t replicate, then it’s hard to describe what you’ve done as scientific at all.
Published and true are not synonyms.
We might try to tell ourselves that there’s something unique about psychology as a discipline that caused its replication crisis. Psychologists have the unenviable job of trying to understand highly variable and highly complicated human beings, with all their different personalities and backgrounds and experiences and moods and quirks. The things they study, like thought, emotion, attention, ability, and perception, are usually intangible – difficult, if not impossible, to pin down in a lab experiment – and in social psychology, they have to study how all those complicated humans interact with one another.
In economics, a miserable 0.1 per cent of all articles published were attempted replications of prior results; in psychology, the number was better, but still nowhere near good, with an attempted replication rate of just over 1 per cent.
After citing research showing that a rather suspicious one hundred per cent of trials of acupuncture from scientists in China had positive results…
An adopted hypothesis gives us lynx-eyes for everything that confirms it and makes us blind to everything that contradicts it.
Arthur Schopenhauer, The World as Will and Representation (1818)
‘Why do studies always find something rather than nothing?’
Research has quantified just how positive the scientific literature is: the meta-scientist Daniele Fanelli, in a 2010 study, searched through almost 2,500 papers from across all scientific disciplines, totting up how many reported a positive result for the first hypothesis they tested. Different fields of science had different positivity levels. The lowest rate, though still high, at 70.2 per cent, was space science; you may not be surprised to discover that the highest was psychology/psychiatry, with positive studies making up 91.5 per cent of publications.9 Reconciling this astounding…
journal editors and reviewers also make the decision to accept and publish papers according to how interesting the findings appear, not necessarily how meticulous the researchers have been in discovering them.
why bother submitting your null paper for publication if it has a negligible chance of being accepted?
Despite being one of the most commonly used statistics in science, the p-value has a notoriously tricky definition. A recent audit found that a stunning 89 per cent of a sample of introductory psychology textbooks got the definition wrong…
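For reference, the definition those textbooks fumble: the p-value is the probability, assuming the null hypothesis is true, of obtaining a result at least as extreme as the one actually observed. It is not the probability that the null hypothesis is true. A minimal Python sketch (illustrative, not from the book; sample sizes and seeds are arbitrary) checks a t-test's analytic p-value against that definition by brute-force simulation:

```python
# What a p-value actually is: the probability, *assuming the null
# hypothesis is true*, of a test statistic at least as extreme as
# the one observed. All numbers here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One "experiment": two groups drawn from the SAME distribution (H0 is true).
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.0, scale=1.0, size=30)
t_obs, p_obs = stats.ttest_ind(group_a, group_b)

# Check the definition by brute force: under H0, how often does a fresh
# experiment produce a |t| at least as extreme as the one observed?
more_extreme = 0
n_sims = 20_000
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, 30)
    b = rng.normal(0.0, 1.0, 30)
    t, _ = stats.ttest_ind(a, b)
    if abs(t) >= abs(t_obs):
        more_extreme += 1

print(f"analytic p-value:  {p_obs:.3f}")
print(f"simulated p-value: {more_extreme / n_sims:.3f}")  # should roughly agree
```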
From the scientists in question, Franco and her colleagues learned that 65 per cent of studies with null results had never even been written up in the first place, let alone sent off to a journal. Many of those scientists predicted they’d have no chance of publication. ‘The unfortunate reality of the publishing world [is] that null effects do not tell a clear story,’…
type of p-hacking is known as HARKing, or Hypothesising After the Results are Known. It’s nicely summed up by the oft-repeated analogy of the ‘Texas sharpshooter’, who takes his revolver and randomly riddles the side of a barn with gunshots, then swaggers over to paint a bullseye around the few bullet holes that happen to be near to one another, claiming that’s where he was aiming all along.50 Both kinds of p-hacking…
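A toy simulation makes the sharpshooter's trick concrete (an illustrative sketch, not from the book): measure enough unrelated outcomes on pure noise and something will usually cross p < 0.05 for you to paint a bullseye around after the fact:

```python
# The Texas sharpshooter in code: test 20 unrelated "outcomes" on pure
# noise, then report whichever happens to cross p < .05 as if it had
# been the hypothesis all along. Purely illustrative numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_outcomes, n_per_group = 20, 30

hits = []
for outcome in range(n_outcomes):
    # No real effect anywhere: both groups come from the same distribution.
    treatment = rng.normal(0.0, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        hits.append((outcome, round(p, 3)))

# With 20 null tests, the chance of at least one "significant" result
# is 1 - 0.95**20, i.e. roughly 64 per cent.
print(f"'significant' outcomes found by chance: {hits}")
print(f"chance of at least one false positive: {1 - 0.95**20:.0%}")
```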
The currency of positive, statistically significant results in science is so strong that many researchers forget that null results matter too. To know that a treatment doesn’t work, or that a disease isn’t related to some biomarker, is useful information: it means we might want to spend our time and money elsewhere in future. If it’s properly designed, a study should be of interest whether it produces positive or null results.
The whole sorry tale is a textbook example of the perils of low statistical power. The initial candidate gene studies, being small-scale, could only see large effects – therefore, large effects were what they reported.
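The mechanism, sometimes called the ‘winner’s curse’, is easy to demonstrate (again an illustrative sketch, not from the book; the effect size and sample size are arbitrary): give many small studies the same modest true effect, keep only the ones that reach statistical significance, and the surviving effect estimates come out several times too large:

```python
# The winner's curse behind underpowered studies: with a small true
# effect and small samples, only studies that happen to OVERestimate
# the effect reach significance, so reported effects are inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect = 0.2          # a small true difference (in SD units)
n_per_group = 20           # a small, underpowered study
n_studies = 10_000

significant_effects = []
for _ in range(n_studies):
    carriers = rng.normal(true_effect, 1.0, n_per_group)
    controls = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(carriers, controls)
    if p < 0.05:
        significant_effects.append(carriers.mean() - controls.mean())

print(f"true effect:                   {true_effect}")
print(f"power (share significant):     {len(significant_effects) / n_studies:.0%}")
print(f"mean effect among significant: {np.mean(significant_effects):.2f}")
# The significant studies report an effect several times the true one:
# small studies can only 'see' large effects.
```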
The vast majority of results from mice, somewhere around 90 per cent, don’t end up translating to human beings.20
incentives for teachers that reward school rankings rather than learning, leading to questionable marking;
In clinical trials, it’s been alleged that pharmaceutical companies and other drug researchers use tactical salami-slicing to take advantage of the fact that readers aren’t paying full attention to every publication. Split up your study into several publications and you’ll give the impression that there’s stronger support for the efficacy of your drug than if there were just one or two papers published on it. It’s a devious, but probably effective, tactic: busy doctors who see that there are six papers claiming support for one drug and only one paper for another might be more likely to…
To paraphrase the biologist Ottoline Leyser, the point of breaking ground is to begin to build something; if all you do is groundbreaking, you end up with a lot of holes in the ground but no buildings.
In the same way, there isn’t really such a thing as Open Science: there’s science, and then there’s an inscrutable, closed-off, unverifiable activity academics engage in where your only option is to have blind faith that they’re getting it right.
In spite of the perverse incentives, in spite of the publication system, in spite of academia and in spite of scientists, science does actually contain the tools to heal itself. It’s with more science that we can discover where our research has gone wrong and work out how to fix it. The ideals of the scientific process aren’t the problem: the problem is the betrayal of those ideals by the way we do research in practice. If we can only begin to align the practice with the values, we can regain any wavering trust – and stand back to marvel at all those wondrous discoveries with a clear conscience.
The world is rightly proud of where science has brought us. To retain that pride, we owe it something far better than the product of our flawed human temperaments.