Kindle Notes & Highlights
Read between July 16, 2022 and January 24, 2023
And because scientists are human beings, the ways that they try to persuade each other aren’t always fully rational or objective.4 If we don’t take great care, our scientific process can become permeated by very human flaws.
The world of functional brain-imaging was also rocked by a paper which revealed a statistical error in a default setting of a software package commonly used to analyse imaging data. The error produced a vast number of accidental, uncorrected false-positive results, and it might have compromised around 10 per cent of all studies ever published on the topic.
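The mechanics of that kind of failure are easy to demonstrate in miniature. Below is a minimal Python sketch, not a reconstruction of the actual imaging software, showing how running thousands of uncorrected significance tests on pure noise guarantees false positives, and how a standard correction suppresses them; all the numbers are illustrative assumptions.

```python
# Minimal sketch (not the actual imaging bug): many uncorrected tests on
# pure noise yield false positives at the nominal rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 10_000          # hypothetical number of independent tests
n_subjects = 20

# Pure noise: no real effect exists at any "voxel".
data = rng.normal(size=(n_voxels, n_subjects))
t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

uncorrected = np.sum(p < 0.05)
bonferroni = np.sum(p < 0.05 / n_voxels)   # one standard correction

print(f"False positives, uncorrected: {uncorrected} of {n_voxels}")
print(f"False positives, Bonferroni-corrected: {bonferroni}")
# Expect roughly 500 uncorrected "hits" (5%) and about zero corrected ones.
```

If the software's default quietly skipped (or mis-applied) such a correction, every analysis run with that default would inherit the inflated false-positive rate.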
Peanut allergies can be deadly, and if a parent has one, their children are at higher risk of developing one too. For many years, the guidelines for at-risk babies, based on previous research, were to avoid giving them peanuts until they were at least three years old, and for breastfeeding mothers to avoid peanuts as well. It turns out this advice was exactly backwards: a high-quality randomised trial in 2015 showed that only around 2 per cent of at-risk children who ate peanuts early in life developed an allergy to them by age five, compared to almost 14 per cent of at-risk children who avoided them.
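To put that difference in concrete terms, here is a quick back-of-the-envelope calculation using only the rates quoted above; the derived quantities (relative risk, number needed to treat) are standard epidemiological measures, not figures reported by the trial itself.

```python
# Quick arithmetic on the rates quoted above (~2% vs ~14%); only the two
# rates come from the highlight, everything else is derived.
risk_eating = 0.02     # allergy rate, at-risk children who ate peanuts early
risk_avoiding = 0.14   # allergy rate, at-risk children who avoided them

relative_risk = risk_eating / risk_avoiding
absolute_risk_reduction = risk_avoiding - risk_eating
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk, early eating vs avoidance: {relative_risk:.2f}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")
print(f"Children fed early per allergy prevented: {number_needed_to_treat:.1f}")
```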
The extent of the uncertainty in medical science can only fully be appreciated if we take a look across the whole literature. One way of doing so is to consult the many comprehensive reviews published by the Cochrane Collaboration, a highly reputable charity that systematically assesses the quality of medical treatments. Of those, a startling 45 per cent conclude that there's insufficient evidence to decide whether the treatment in question works or not.66 How many patients have had their hopes raised, have suffered, or have even died because their doctors have used worthless or harmful treatments?
At the risk of sounding tautological: since underpowered studies only have the power to detect large effects, large effects are the only effects they see. Follow that logic to its conclusion: if you find an effect in an underpowered study, that effect is probably exaggerated.
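This phenomenon is sometimes called the "winner's curse", and it can be checked with a short simulation. The sketch below assumes a small true effect and small samples (both numbers are illustrative): among the few simulated studies that reach statistical significance, the average observed effect is far larger than the truth.

```python
# Sketch of the "winner's curse": simulate many underpowered studies of a
# small true effect, then look only at the ones that reach significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2      # small true effect (Cohen's d); an assumption
n_per_group = 20       # small samples -> low statistical power
n_studies = 5_000

significant_effects = []
for _ in range(n_studies):
    a = rng.normal(true_effect, 1.0, n_per_group)   # treatment group
    b = rng.normal(0.0, 1.0, n_per_group)           # control group
    t, p = stats.ttest_ind(a, b)
    if p < 0.05:
        # With unit variances the raw mean difference approximates Cohen's d.
        significant_effects.append(a.mean() - b.mean())

print(f"Power: {len(significant_effects) / n_studies:.0%}")
print(f"True effect: {true_effect}")
print(f"Mean effect among significant studies: {np.mean(significant_effects):.2f}")
# The significant studies overestimate the true effect, often several-fold.
```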
These are the long-term domino effects of underpowered research: study after study wastes time, effort and resources chasing an effect that’s like the giant shadow projected by a moth sitting on a lightbulb.
To paraphrase the biologist Ottoline Leyser, the point of breaking ground is to begin to build something; if all you do is groundbreaking, you end up with a lot of holes in the ground but no buildings.
Here’s another way to deal with statistical bias and p-hacking: take the analysis completely out of the researchers’ hands. In this scenario, upon collecting their data, scientists would hand them over for analysis to independent statisticians or other experts, who would presumably be mostly free of the specific biases and desires of those who designed and performed the experiment.33
The wider relevance is obvious: instead of running just one analysis, which might suit our individual biases, we should take a much broader view of statistics, looking at all the counterfactuals and asking ourselves what might have happened if we'd decided to run things slightly differently.
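This broader view is what's sometimes called a multiverse or specification-curve analysis. The sketch below uses a toy dataset and two arbitrary analytic choices (an outlier cutoff and a transformation, both hypothetical stand-ins for real researcher decisions); the point is that reporting the full spread of results across defensible specifications is more honest than reporting a single, possibly cherry-picked one.

```python
# Sketch of a "multiverse" analysis: rerun the same test under every
# defensible combination of analytic choices and report the whole spread
# of p-values rather than a single favourable one.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 200)
y = 0.1 * x + rng.normal(0, 1, 200)   # weak true relationship (assumed)

# Hypothetical analytic choices a researcher might defensibly make:
outlier_cutoffs = [None, 2.5, 3.0]       # drop |x| beyond k SDs, or keep all
transforms = [lambda v: v, np.tanh]      # raw values vs a squashing transform

p_values = []
for cutoff, transform in itertools.product(outlier_cutoffs, transforms):
    keep = np.ones_like(x, dtype=bool) if cutoff is None else np.abs(x) < cutoff
    r, p = stats.pearsonr(transform(x[keep]), transform(y[keep]))
    p_values.append(p)

# One analysis gives one p-value; the multiverse reveals the full range.
print(f"p-values across {len(p_values)} specifications: "
      f"min={min(p_values):.3f}, max={max(p_values):.3f}")
```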
Journals don’t just have the role of disseminating research to the world – they make the decisions on what gets published. What if we were to separate these two roles entirely?73 One radical proposal goes like this: after completing a study, scientists write a preprint and upload it to an online repository. They then request that it be reviewed by a review service: a new kind of organisation that recruits peer reviewers in the usual way but that is separate from any scientific journal.74 The service reviews the paper and gives it a grade. The authors, if they wish, can then go back and revise the paper in light of the reviews.
And as we discussed in the previous chapter, the system creates a selection pressure where the only academics who survive are the ones who are naturally good at playing the game.