Kindle Notes & Highlights
by Ben Goldacre
In 1987, Thomas showed that simply giving a diagnosis—even a fake “placebo” diagnosis—improved patient outcomes.
His results suggest—albeit in a very small sample—that a drug could be made to have the opposite effect from what you would predict from the pharmacology, simply by manipulating people’s expectations.
It’s been shown, for example, that the effects of a real drug in the body can sometimes be induced by the placebo “version,” not only in humans but also in animals.
Most drugs for Parkinson’s disease work by increasing dopamine release; patients receiving a placebo treatment for Parkinson’s disease, for example, showed extra dopamine release in the brain.
we find that animals’ immune systems can be conditioned to respond to placebos, in exactly the same way that Pavlov’s dog began to salivate in response to the sound of a bell.
People have tended to think, rather pejoratively, that if your pain responds to a placebo, that means it’s “all in the mind.”
It’s no good trying to exempt yourself, and pretend that this is about other people, because we all respond to the placebo.
Believing in things that have no evidence carries its own corrosive intellectual side effects, just as prescribing a pill in itself carries risks: it medicalizes problems, as we shall see, it can reinforce destructive beliefs about illness, and it can promote the idea that a pill is an appropriate response to a social problem, or a modest viral illness.
survey data shows that a disappointing experience with mainstream medicine is almost the only factor that regularly correlates with choosing alternative therapies.
And at the extreme, when they’re not undermining public health campaigns and leaving their patients exposed to fatal diseases, homeopaths who are not medically qualified can miss fatal diagnoses or actively disregard them,
Forty years ago a man called Austin Bradford Hill, the grandfather of modern medical research, who was key in discovering the link between smoking and lung cancer, wrote out a set of guidelines, a kind of tick list, for assessing causality, the relationship between an exposure and an outcome.
These are the cornerstone of evidence-based medicine, and often worth having at the back of your mind: it needs to be a strong association, which is consistent, and specific to the thing you are studying, where the putative cause comes before the supposed effect in time; ideally there should be a biological gradient, such as a dose-response effect; it should be consistent or at least not completely at odds with what is already known (because extraordinary claims require extraordinary evidence); and it should be biologically plausible.
There have been an estimated fifteen million medical academic articles published so far, and five thousand journals are published every month.
picking out what’s relevant—and what’s not—is a gargantuan task.
There are few opinions so absurd that you couldn’t find at least one person with a Ph.D. somewhere in the world to endorse them for you; and similarly, there are few propositions in medicine so ridiculous that you couldn’t conjure up some kind of published experimental evidence somewhere to support them,
If I had a T-shirt slogan for this whole book, it would be: “I think you’ll find it’s a bit more complicated than that.”
And it’s pretty dark in your bowels; in fact, if there’s any light in there at all, something’s gone badly wrong.
I’m bending over backward to be reasonable here—
I mean, I don’t sign my dead cat up to bogus professional organizations for the good of my health, you know.
there are some things that we know with a fair degree of certainty: there is reasonably convincing evidence that having a diet rich in fresh fruit and vegetables, with natural sources of dietary fiber, avoiding obesity, moderating one’s intake of alcohol, cutting out cigarettes, and taking physical exercise are protective against such things as cancer and heart disease.
the unjustified, unnecessary overcomplication of this basic dietary advice is, to my mind, one of the greatest crimes of the nutritionist movement.
This chapter did not appear in the original British edition of this book, because for fifteen months leading up to September 2008 the vitamin pill entrepreneur Matthias Rath was suing me personally, and The Guardian, for libel.
Rath dropped his case, which had cost in excess of $770,000 to defend.
He eventually paid $365,000, leaving The Guardian with a large shortfall.
I now know more about Matthias Rath than almost any other person alive. My notes, references, and witness statements, boxed up in the room where I am sitting right now, make a pile as tall as the man himself, and what I will write here is only a tiny fraction of the fuller story that is waiting to be told about him.
From the state of current knowledge, around 13 percent of all treatments have good evidence, and a further 21 percent are likely to be beneficial.
it turns out, depending on specialty, that between 50 and 80 percent of all medical activity is “evidence based.”
The U.S. pharmaceutical industry’s annual spend on promotion is more than three billion dollars, and it works, increasing prescriptions and doctor visits.
Big pharma is evil; I would agree with that premise. But because people don’t understand exactly how big pharma is evil, their anger gets diverted away from valid criticisms—its role in distorting data, for example, or withholding lifesaving AIDS drugs from the developing world—and channeled into infantile fantasies.
The golden age of medicine has creaked to a halt, as we have said, and the number of new drugs, or “new molecular entities,” being registered has dwindled from fifty a year in the 1990s to about twenty now.
the number of me-too drugs has risen, making up to half of all new drugs.
(anyone can report an adverse event to the FDA’s MedWatch system online).
drug trials are expensive, so an astonishing 90 percent of clinical drug trials, and 70 percent of trials reported in major medical journals, are conducted or commissioned by the pharmaceutical industry.
drug companies have a huge influence over what gets researched, how it is researched, how the results are reported, how they are analyzed, and how they are interpreted.
Different people respond differently to drugs: old people on lots of medications are often no-hopers, whereas younger people with just one problem are more likely to show an improvement.
So study your drug only in the latter group.
This is so commonplace it is hardly worth giving an example.
Next up, you could compare your drug against a useless control.
you’ve already spent hundreds of millions of dollars bringing your drug to market, so stuff that: do lots of placebo-controlled trials, and make a big fuss about them, because they practically guarantee some positive data.
this is universal,
If you do have to compare your drug with one produced by a competitor—to save face or because a regulator demands it—you could try a sneaky underhand trick: use an inadequate dose of the competing drug, so that patients on it don’t do very well, or give a very high dose of the competing drug, so that patients experience lots of side effects, or give the competing drug in the wrong way
another trick you could pull with side effects is simply not to ask about them,
If your drug is supposed to reduce cholesterol and so prevent cardiac deaths, for example, don’t measure cardiac deaths; measure reduced cholesterol instead. That’s much easier to achieve than a reduction in cardiac deaths, and the trial will be cheaper and quicker to do, so your result will be more positive.
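A rough sketch of why this trick works, not taken from the book: the cholesterol shift, death rates, significance level, and power below are all hypothetical numbers chosen only to show that a surrogate outcome (a blood test) needs far fewer patients than a hard outcome (cardiac death), using standard sample-size formulas.

```python
# Hypothetical back-of-the-envelope sample-size comparison:
# surrogate outcome (cholesterol) vs. hard outcome (cardiac death).
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # ~2.80 for these settings

# Surrogate outcome: assumed mean cholesterol reduction of 0.4 mmol/L, SD 1.0
delta, sd = 0.4, 1.0
n_surrogate = 2 * (z * sd / delta) ** 2          # patients per arm, two-sample z-test

# Hard outcome: assumed cardiac death rate falling from 2.0% to 1.5%
p1, p2 = 0.020, 0.015
n_hard = z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

print(f"patients per arm, cholesterol endpoint:   {n_surrogate:.0f}")   # ~98
print(f"patients per arm, cardiac-death endpoint: {n_hard:.0f}")        # ~10,800
```

Under these invented assumptions the surrogate endpoint needs roughly a hundred patients per arm, the hard endpoint roughly ten thousand, which is the whole appeal of measuring the blood test instead of the deaths.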
you could try an old trick: don’t draw attention to the disappointing data by putting it on a graph.
If your results are completely negative, don’t publish them at all, or publish them only after a long delay.
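A minimal simulation sketch of what selective publication does, again not from the book and with invented numbers: run many small trials of a drug with no real effect, "publish" only the ones that come out positive and statistically significant, and look at what the published literature then says.

```python
# Hypothetical publication-bias simulation: the true drug effect is zero,
# but only flattering trials get written up.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_per_arm = 1000, 50          # assumed: 1,000 small two-arm trials
published_effects = []

for _ in range(n_trials):
    drug = rng.normal(0.0, 1.0, n_per_arm)      # true effect is exactly zero
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    diff = drug.mean() - placebo.mean()
    _, p = ttest_ind(drug, placebo)
    if p < 0.05 and diff > 0:                   # only "positive" trials are published
        published_effects.append(diff)

print(f"trials run: {n_trials}, trials published: {len(published_effects)}")
if published_effects:
    print(f"mean effect in the published trials: {np.mean(published_effects):.2f}")
```

The drug does nothing, yet the handful of trials that clear the publish-only-if-positive filter show a consistent apparent benefit; anyone reading only the published papers would conclude the drug works.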
Or you could get really serious, and start to manipulate the statistics.
it is here for the doctors who bought the book to laugh at homeopaths.
If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written, and edited entirely by the industry).
if your finding is really embarrassing, hide it away somewhere, and cite “data on file.”

