Kindle Notes & Highlights
Many painkillers can cause gut problems—ulcers and more—and…
the seductive march to medicalise everyday life; the fantasies about pills, mainstream and quack; and the ludicrous health claims about food, where journalists are every bit as guilty as nutritionists.
the people who run the media are humanities graduates with little understanding of science, who wear their ignorance as a badge of honour.
there is an attack implicit in all media coverage of science: in their choice of stories, and the way they cover them, the media create a parody of science.
science is portrayed as a series of groundless, incomprehensible, didactic truth statements from scientists, who themselves are socially powerful, arbitrary, unelected authority figures. They are detached from reality; they do work that is either wacky or dangerous, but either way, everything in science is tenuous, contradictory, probably going to change soon and, most ridiculously, ‘hard to understand’. Having created this parody, the commentariat then attack it, as if they were genuinely critiquing what science is all about.
They are also there to make money, to promote products, and to fill pages cheaply, with a minimum of journalistic effort.
most news editors wouldn’t know a science story if it danced naked in front of them.
It’s quite understandable that newspapers should feel it’s their job to write about new stuff. But if an experimental result is newsworthy, it can often be for the same reasons that mean it is probably wrong: it must be new and unexpected; it must change what we previously thought; which is to say, it must be a single, lone piece of information which contradicts a large amount of pre-existing experimental evidence.
There has been a lot of excellent work done, much of it by a Greek academic called John Ioannidis, demonstrating how and why a large amount of brand-new research with unexpected results will subsequently turn out to be false.
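This point can be made concrete with a back-of-the-envelope calculation in the spirit of Ioannidis's argument. A minimal sketch, assuming illustrative values for statistical power, the significance threshold, and the prior plausibility of the hypothesis; none of these figures come from the book:

```python
# Rough sketch of the arithmetic behind Ioannidis's argument.
# All numbers here are illustrative assumptions, not figures from the book.

def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a 'significant' finding is actually true,
    given the prior probability that the hypothesis is true."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A surprising result, by definition, tests an unlikely hypothesis:
for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>5}: P(finding is true) = {ppv(prior):.2f}")
# prior   0.5: P(finding is true) = 0.94
# prior   0.1: P(finding is true) = 0.64
# prior  0.01: P(finding is true) = 0.14
```

The more surprising the hypothesis (the lower the prior), the more likely a ‘significant’ result is to be one of the false positives rather than a genuine discovery.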
In the aggregate, these ‘breakthrough’ stories sell the idea that science—and indeed the whole empirical world view—is only about tenuous, new, hotly contested data and spectacular breakthroughs. This reinforces one of the key humanities graduates’ parodies of science: as well as being irrelevant boffinry, science is temporary, changeable, constantly revising itself, like a transient fad. Scientific findings, the argument goes, are therefore dismissible.
The biggest problem with science stories is that they routinely contain no scientific evidence at all. Why? Because papers think you won’t understand the ‘science bit’, so all stories involving science must be dumbed down, in a desperate bid to seduce and engage the ignorant, who are not interested in science anyway (perhaps because journalists think it is good for you, and so should be democratised).
If you are simply presented with the conclusions of a piece of research, without being told what was measured, how, and what was found—the evidence—then you are simply taking the researchers’ conclusions at face value, and being given no insight into the process.
where there is controversy about what the evidence shows, it reduces the discussion to a slanging match, because a claim such as ‘MMR causes autism’ (or not) is only critiqued in terms of the character of the person who is making the statement, rather than the evidence they are able to present.
The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.
Robert Pirsig, Zen and the Art of Motorcycle Maintenance
If the scientific method has any authority—or as I prefer to think of it, ‘value’—it is because it represents a systematic approach; but this is valuable only because the alternatives can be misleading. When we reason informally—call it intuition, if you like—we use rules of thumb which simplify problems for the sake of efficiency. Many of these shortcuts have been well characterised in a field called ‘heuristics’, and they are efficient ways of knowing in many circumstances.
When our cognitive system—our truth-checking apparatus—is fooled, then, much like seeing depth in a flat painting, we come to erroneous conclusions about abstract things. We might misidentify normal fluctuations as meaningful patterns, for example, or ascribe causality where in fact there is none.
to construct a broad understanding of the world from a memory of your own experiences would be like looking at the ceiling of the Sistine Chapel through a long, thin cardboard tube: you can try to remember the individual portions you’ve spotted here and there, but without a system and a model, you’re never going to appreciate the whole picture.
Simple regression to the mean is confused with causation, and this is perhaps quite natural for animals like humans, whose success in the world depends on our being able to spot causal relationships rapidly and intuitively: we are inherently oversensitive to them.
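Regression to the mean can be shown with a toy simulation. A minimal sketch, assuming a stable underlying severity plus random day-to-day fluctuation, and no treatment effect at all; every number below is invented for illustration:

```python
# A minimal simulation of regression to the mean: a stable 'true' severity
# plus independent day-to-day noise, and no treatment of any kind.
import random

random.seed(0)
TRUE_SEVERITY = 50          # everyone's long-run average symptom score
NOISE = 15                  # day-to-day fluctuation

patients = [(TRUE_SEVERITY + random.gauss(0, NOISE),   # score on day 1
             TRUE_SEVERITY + random.gauss(0, NOISE))   # score on day 2
            for _ in range(10_000)]

# Enrol only the people who felt worst on day 1, as a trial of a quack
# remedy effectively does, then watch them 'improve' with no treatment:
enrolled = [p for p in patients if p[0] > 70]
day1 = sum(p[0] for p in enrolled) / len(enrolled)
day2 = sum(p[1] for p in enrolled) / len(enrolled)
print(f"day 1 average: {day1:.1f}, day 2 average: {day2:.1f}")
# Day 2 comes back to around 50: the 'recovery' is pure selection, not causation.
```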
1. We see patterns where there is only random noise.
2. We see causal relationships where there are none.
This tendency is dangerous, because if you only ask questions that confirm your hypothesis, you will be more likely to elicit information that confirms it, giving a spurious sense of confirmation. It also means—thinking more broadly—that the people who pose the questions already have a head start in popular discourse.
3. We overvalue confirmatory information for any given hypothesis.
4. We seek out confirmatory information for any given hypothesis.
the subjects’ faith in research data was not predicated on an objective appraisal of the research methodology, but on whether the results validated their pre-existing views.
Our assessment of the quality of new evidence is biased by our previous beliefs.
Our attention is always drawn to the exceptional and the interesting, and if you have something to sell, it makes sense to guide people’s attention to the features you most want them to notice.
No matter what you do with statistics about risk or recovery, your numbers will always have inherently low psychological availability, unlike miracle cures, scare stories, and distressed parents.
our values are socially reinforced by conformity and by the company we keep. We are selectively exposed to information that revalidates our beliefs, partly because we expose ourselves to situations where those beliefs are apparently confirmed; partly because we ask questions that will—by their very nature, for the reasons described above—give validating answers; and partly because we selectively expose ourselves to people who validate our beliefs.
‘Communal reinforcement’ is the process by which a claim becomes a strong belief, through repeated assertion by members of a community. The process is independent of whether the claim has been properly researched, or is supported by empirical data significant enough to warrant belief by reasonable people.
Communal reinforcement goes a long way towards explaining how religious beliefs can be passed on in communities from generation to generation. It also explains how testimonials within communities of therapists, psychologists, celebrities, theologians, politicians, talk-show hosts, and so on, can supplant and become more powerful than scientific evidence.
When people learn no tools of judgement and merely follow their hopes, the seeds of political manipulation…
Most of us exhibit something called ‘attributional bias’: we believe our successes are due to our own internal faculties, and our failures are due to external factors; whereas for others, we believe their successes are due to luck, and their failures to their own flaws. We can’t all be right.
we use context and expectation to bias our appreciation of a situation—because, in fact, that’s the only way we can think.
We tend to assume, for example, that positive characteristics cluster: people who are attractive must also be good; people who seem kind might also be intelligent and well-informed. Even this has been demonstrated experimentally: identical essays in neat handwriting score higher than messy ones; and the behaviour of sporting teams which wear black is rated as more aggressive and unfair than teams which wear white.
for mathematical issues, or assessing causal relationships, intuitions are often completely wrong, because they rely on shortcuts which have arisen as handy ways to solve complex cognitive problems rapidly, but at a cost of inaccuracies, misfires and oversensitivity.
It’s not safe to let our intuitions and prejudices run unchecked and unexamined: it’s in our interest to challenge these flaws in intuitive reasoning wherever we can, and the methods of science and statistics grew up specifically in opposition to these flaws. Their thoughtful application is our best weapon against these pitfalls, and the challenge, perhaps, is to work out which tools to use where.
They are also much more likely to make the right decision when information about risk is presented as natural frequencies, rather than as probabilities or percentages.
I want to know who you’re talking about (e.g. men in their fifties); I want to know what the baseline risk is (e.g. four men out of a hundred will have a heart attack over ten years); and I want to know what the increase in risk is, as a natural frequency (two extra men out of that hundred will have a heart attack over ten years).
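Those three numbers can be produced mechanically from the kind of relative-risk figure a headline reports. A minimal sketch using the highlight's own illustrative numbers: a baseline of four men in a hundred, and an increase a headline might report as ‘50%’:

```python
# Turning a relative-risk headline into natural frequencies,
# using the illustrative numbers from the highlight above.

baseline_per_100 = 4        # men in their fifties having a heart attack over ten years
relative_increase = 0.50    # a '50% increased risk' headline

extra_per_100 = baseline_per_100 * relative_increase
print(f"Headline: risk up {relative_increase:.0%}")
print(f"Plainly:  {baseline_per_100} in 100 affected anyway; "
      f"{extra_per_100:.0f} extra in 100 with the exposure.")
# Headline: risk up 50%
# Plainly:  4 in 100 affected anyway; 2 extra in 100 with the exposure.
```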
H.G. Wells said that statistical thinking would one day be as important as the ability to read and write in a modern technological society. I disagree; probabilistic reasoning is difficult for everyone, but everyone understands normal numbers.
What does ‘statistically significant’ mean? It’s just a way of expressing the likelihood that the result you got was attributable merely to chance. Sometimes you might throw ‘heads’ five times in a row, with a completely normal coin, especially if you kept tossing it for long enough. Imagine a jar of 980 blue marbles, and twenty red ones, all mixed up: every now and then—albeit rarely—picking blindfolded, you might pull out three red ones in a row, just by chance. The standard cut-off point for statistical significance is a p-value of 0.05, which is just another way of saying, ‘If I did this experiment a hundred times, I’d expect a spurious positive result on five occasions, just from chance alone.’
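The coin and marble examples reduce to simple arithmetic, sketched below; the only figures used are the ones in the highlight itself:

```python
# The coin and marble examples as straight arithmetic.

# Five heads in a row from a fair coin:
p_five_heads = 0.5 ** 5
print(f"P(5 heads in a row) = {p_five_heads:.4f}")   # 0.0312 -- under 0.05

# Three red marbles in a row, drawn blindfolded without replacement
# from a jar of 980 blue and 20 red:
p_three_red = (20 / 1000) * (19 / 999) * (18 / 998)
print(f"P(3 reds in a row) = {p_three_red:.7f}")     # roughly 7 in a million
# Rare events still happen: run enough experiments, or toss for long
# enough, and some run will cross the 0.05 line by chance alone.
```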
the one thing almost everyone knows about studies like this is that a bigger sample size means the results are probably more significant. But if they’re not independent data points, then you have to treat the sample, in some respects, as a smaller one, so the results become less significant. As statisticians would say, you must ‘correct for clustering’.
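One standard way to correct for clustering is the design effect, sketched below; the cluster size and intra-cluster correlation are assumed purely for illustration, not taken from any study in the book:

```python
# A minimal sketch of 'correcting for clustering' via the standard
# Kish design effect: non-independent data points count for less.

def effective_sample_size(n, cluster_size, icc):
    """Shrink a nominal sample size by the design effect."""
    design_effect = 1 + (cluster_size - 1) * icc
    return n / design_effect

# 10,000 patients, but recruited in clinics of 50, with modest similarity
# (intra-cluster correlation 0.05) between patients at the same clinic:
print(f"{effective_sample_size(10_000, 50, 0.05):.0f} effective data points")
# ~2899 -- far smaller than the headline sample size suggests.
```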
Suddenly, when the background rate of an event is rare, even our previously brilliant blood test becomes a bit rubbish.
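The blood-test point is just the base rate working through Bayes’ theorem. A minimal sketch, assuming a test with 99% sensitivity and 99% specificity; those figures are invented, and only the base rate changes between the two runs:

```python
# Why a 'brilliant' test turns rubbish when the condition is rare.

def p_ill_given_positive(base_rate, sensitivity=0.99, specificity=0.99):
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

for base_rate in (0.5, 0.001):   # common condition vs. 1-in-a-thousand
    print(f"base rate {base_rate}: "
          f"P(ill | positive) = {p_ill_given_positive(base_rate):.2f}")
# base rate 0.5:   P(ill | positive) = 0.99
# base rate 0.001: P(ill | positive) = 0.09
# With a rare condition, almost every positive result is a false alarm.
```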
This breaks a cardinal rule of any research involving statistics: you cannot find your hypothesis in your results. Before you go to your data with your statistical tool, you have to have a specific hypothesis to test. If your hypothesis comes from analysing the data, then there is no sense in analysing the same data again to confirm it.
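Why this rule matters can be shown with one line of arithmetic: trawl your data with many comparisons and pure chance will almost certainly hand you something that looks like a finding. The figure of a hundred comparisons below is an assumption for illustration:

```python
# Probability that pure noise yields at least one 'significant' result
# when you run 100 independent tests at the 0.05 level:
p_at_least_one = 1 - 0.95 ** 100
print(f"P(at least one spurious hit) = {p_at_least_one:.3f}")   # 0.994
# Which is why a pattern found by trawling the data must be stated as a
# hypothesis in advance and then confirmed on fresh data.
```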
you do not just multiply p-values together; you weave them with a clever tool, like maybe ‘Fisher’s method for combination of independent p-values’.
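Fisher’s method is short enough to sketch in full: combine k independent p-values into the statistic -2 Σ ln(p), which under the null hypothesis follows a chi-squared distribution with 2k degrees of freedom. The example p-values below are invented; this version leans on scipy for the chi-squared tail, and scipy.stats.combine_pvalues performs the same calculation:

```python
# Fisher's method for combining independent p-values.
from math import log
from scipy.stats import chi2

def fishers_method(pvalues):
    # Under the null, -2 * sum(ln p) ~ chi-squared with 2k degrees of freedom.
    statistic = -2 * sum(log(p) for p in pvalues)
    return chi2.sf(statistic, df=2 * len(pvalues))

print(f"combined p = {fishers_method([0.10, 0.20, 0.30]):.3f}")   # ~0.115
# Naive multiplication gives 0.10 * 0.20 * 0.30 = 0.006 -- spuriously small,
# because a product of numbers below one always shrinks.
```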
‘Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments’, by Justin Kruger and David Dunning. They noted that people who are incompetent suffer a dual burden: not only are they incompetent, but they may also be too incompetent to assay their own incompetence, because the skills which underlie an ability to make a correct judgement are the same as the skills required to recognise a correct judgement.
people who performed particularly poorly relative to their peers were unaware of their own incompetence; but more than that, they were also less able to recognize competence in others, because this, too, relied on ‘meta-cognition’, or knowledge about the skill.
humanities graduates in the media, perhaps feeling intellectually offended by how hard they find the science, conclude that it must simply be arbitrary, made-up nonsense to everyone.
You can pick a result from anywhere you like, and if it suits your agenda, then that’s that: nobody can take it away from you with their clever words, because it’s all just game-playing, it just depends on who you ask, none of it really means anything, you don’t understand the long words, and therefore, crucially, probably, neither do the scientists.
because if the vaccine for hepatitis B, or MMR, or polio, is dangerous in one country, it should be equally dangerous everywhere on the planet; and if those concerns were genuinely grounded in the evidence, especially in an age of the rapid propagation of information, you would expect the concerns to be expressed by journalists everywhere. They’re not.
‘The true cost of something,’ as the Economist says, ‘is what you give up to get it.’
generalists, for the simple reason that they want stupid stories. Science is beyond their intellectual horizon, so they assume you can just make it up anyway.