Kindle Notes & Highlights
When the researchers increased light levels, they found that performance improved. But when they reduced the light levels, performance improved then, too. In fact, they found that no matter what they did, productivity increased anyway. This finding was very important: when you tell workers they are part of a special study to see what might improve productivity, and then you do something … they improve their productivity.
This is a kind of placebo effect, because the placebo is not about the mechanics of a sugar pill, it is about the cultural meaning of an intervention, which includes, amongst other things, your expectations, and the expectations of the people tending to you and measuring you.
To recap: GCSE results will get better anyway; Durham will be desperately trying to improve its GCSE results through other methods anyway; and any kids taking pills will improve their GCSE results anyway, because of the placebo effect and the Hawthorne effect.
on. You might sign up your kids calmly and cautiously, saying in a casual, offhand fashion that you’re doing a small informal study on some tablets, you don’t say what you expect to find, you hand them out without fanfare, and you calmly measure the results at the end. What they did in Durham was the polar opposite. There were camera crews, soundmen and lighting men flooding the classrooms.
The fish-oil story is by no means unique: repeatedly, in a bid to sell pills, people sell a wider explanatory framework, and as George Orwell first noted, the true genius in advertising is to sell you the solution and the problem.
In its most aggressive form, this process has been characterised as ‘disease-mongering’. It can be seen throughout the world of quack cures—and being alive to it can be like having the scales removed from your eyes—but in big pharma the story goes like this: the low-hanging fruit of medical research has all been harvested, and the industry is rapidly running out of novel molecular entities.
Because they cannot find new treatments for the diseases we already have, the pill companies instead invent new diseases for the treatments they already have. Recent favourites include Social Anxiety Disorder (a new use for SSRI drugs), Female Sexual Dysfunction (a new use for Viagra in women), night eating syndrome (SSRIs again) and so on: problems,
These crude biomedical mechanisms may well enhance the placebo benefits from pills, but they are also seductive precisely because of what they edit out. In the media coverage around the rebranding of Viagra as a treatment for women in the early noughties, and the invention of the new disease Female Sexual Dysfunction, for example, it wasn’t just the tablets that were being sold: it was the explanation.
Friends tell me that in some schools it is considered almost child neglect not to buy these capsules, and its impact on this generation of schoolchildren, reared on pills, will continue to bear rich fruit for all the industries, long after the fish-oil capsules have been forgotten.
Company memos described elaborate promotional schemes: planting articles on their research in the media, deploying researchers to make claims on their behalf, using radio phone-ins and the like.
example of that credulous extrapolation from preliminary laboratory data to clinical claim in real human beings that we have come to recognise as a hallmark of the ‘nutritionist’.
Many find it suspicious that black Africans seem to be the biggest victims of AIDS, and point to the biological warfare programmes set up by the apartheid governments; there have also been suspicions that the scientific discourse of HIV/AIDS might be a device, a Trojan horse for spreading even more exploitative Western political and economic agendas around a problem that is simply one of poverty.
We mustn’t appear insensitive to the Christian value system, but it seems to me that engaging sex workers is almost the cornerstone of any effective AIDS policy: commercial sex is frequently the ‘vector of transmission’, and sex workers a very high-risk population; but there are also more subtle issues at stake. If you secure the legal rights of prostitutes to be free from violence and discrimination, you empower them to demand universal condom use, and that way you can prevent HIV from being spread
around 13 per cent of all treatments have good evidence, and a further 21 per cent are likely to be beneficial. This sounds low, but it seems the more common treatments tend to have a better evidence base.
But the pharmaceutical industry is also currently in trouble. The golden age of medicine has creaked to a halt, as we have said, and the number of new drugs, or ‘new molecular entities’, being registered has dwindled from fifty a year in the 1990s to about twenty now.
Me-too drugs are an inevitable function of the market: they are rough copies of drugs that already exist, made by another company, but are different enough for a manufacturer to be able to claim their own patent. They take huge effort to produce, and need to be tested (on human participants, with all the attendant risks) and trialled and refined and marketed just like a new drug. Sometimes they offer modest benefits (a more convenient dosing regime, for example),
First of all, you need an idea for a drug. This can come from any number of places: a molecule in a plant; a receptor in the body that you think you can build a molecule to interface with; an old drug that you’ve tinkered with; and so on. This part of the story is extremely interesting, and I recommend doing a degree in it.
bringing a drug to market costs around $500 million in total. Then you do a Phase III trial, in hundreds or thousands of patients, randomised, blinded, comparing your drug against placebo or a comparable treatment, and collect much more data on efficacy and safety. You might need to do a few of these, and then you can apply for a licence to sell your drug. After it goes to market, you should be doing more trials, and other people will probably do trials and other studies on your drug too; and hopefully everyone will keep their eyes open for any previously unnoticed side-effects, ideally
Well, firstly, you could study it in winners. Different people respond differently to drugs: old people on lots of medications are often no-hopers, whereas younger people with just one problem are more likely to show an improvement. So only study your drug in the latter group. This will make your research much less applicable to the actual people that doctors are prescribing for, but hopefully they won’t notice.
Next up, you could compare your drug against a useless control. Many people would argue, for example, that you should never compare your drug against placebo, because it proves nothing of clinical value: in the real world, nobody cares if your drug is better than a sugar pill; they only care if it is better than the best currently available treatment. But you’ve already spent hundreds of millions of dollars bringing your drug to market, so stuff that: do lots of placebo-controlled trials and make a big fuss about them, because they practically guarantee some positive data.
If you do have to compare your drug with one produced by a competitor—to save face, or because a regulator demands it—you could try a sneaky underhand trick: use an inadequate dose of the competing drug, so that patients on it don’t do very well; or give a very high dose of the competing drug, so that patients experience lots of side-effects; or give the competing drug in the wrong way (perhaps orally when it should be intravenous, and hope most readers don’t notice); or you could increase the dose of the competing drug much too quickly, so that the patients taking it get worse side-effects.
example, or a careful and detailed enquiry. One 3,000-subject review on SSRIs simply did not list any sexual side-effects on its twenty-three-item side-effect table. Twenty-three other things were more important, according to the researchers, than losing the sensation of orgasm. I have read them. They are not.
And here is a good trick: instead of a real-world outcome, like death or pain, you could always use a ‘surrogate outcome’, which is easier to attain.
If your drug is supposed to reduce cholesterol and so prevent cardiac deaths, for example, don’t measure cardiac deaths, measure reduced cholesterol instead. That’s much easier to achieve than a reduction in cardiac deaths, and the trial will be cheaper and qui...
Ignore the protocol entirely
Always assume that any correlation proves causation. Throw all your data into a spreadsheet programme and report—as significant—any relationship between anything and everything if it helps your case. If you measure enough, some things are bound to be positive just by sheer luck.
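That last point, that measuring enough things guarantees some "positive" findings by luck alone, is easy to demonstrate. A minimal sketch in plain Python, using an invented null scenario and a crude z-test (the trial sizes and thresholds here are illustrative assumptions, not anything from the book):

```python
import random

random.seed(0)

def fake_trial(n=50):
    """Two groups drawn from the SAME distribution: there is no real effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5          # both groups have variance 1
    return abs(mean_diff / se) > 1.96   # "significant" at p < 0.05

# Measure 100 unrelated outcomes and report whatever sticks.
hits = sum(fake_trial() for _ in range(100))
print(f"'significant' findings out of 100 null outcomes: {hits}")
```

With a 5 per cent significance threshold, roughly one in twenty purely random comparisons will come out "significant", which is exactly what makes the throw-everything-in-a-spreadsheet approach so productive.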
Play with the baseline
Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, then leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, then adjust for the baseline in your analysis.
Ignore dropouts
People who drop out of trials are statistically much more likely to have done badly, and much more likely to have had side-effects. They will only make your drug look bad. So ignore them, make no attempt t...
Clean up the data
Look at your graphs. There will be some anomalous ‘outliers’, or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look g...
‘The best of five … no … seven … no … nine!’
If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months...
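The stop-as-soon-as-it's-significant trick can be simulated directly. A rough sketch in plain Python, again with an invented null scenario and a crude z-test: peeking at the accumulating data ten times, and stopping at the first "significant" look, inflates the false-positive rate well above the nominal 5 per cent.

```python
import random

random.seed(1)

def peeking_trial(max_n=200, looks=10):
    """No real effect exists; analyse after every batch of patients and
    stop at the first 'significant' result, as the stop-early trick suggests."""
    diffs = []
    for look in range(1, looks + 1):
        n = max_n * look // looks
        while len(diffs) < n:
            # difference between one treated patient and one control
            diffs.append(random.gauss(0, 1) - random.gauss(0, 1))
        mean = sum(diffs) / n
        se = (2 / n) ** 0.5
        if abs(mean / se) > 1.96:
            return True          # declare victory and stop the trial
    return False

trials = 2000
false_positives = sum(peeking_trial() for _ in range(trials))
print(f"false-positive rate with peeking: {false_positives / trials:.1%}")
```

Each individual look uses a 5 per cent threshold, but because the trial gets ten chances to cross it, the overall chance of a spurious "significant" result is several times higher.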
Try every button on the computer
If you’re really desperate, and analysing your data the way you planned does not give you the result you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.
And when you’re finished, the most important thing, of course, is to publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written and edited entirely by the industry): remember, the tricks we have just described hide nothing, and will be obvious to anyone who reads your paper, but only if they read it very attentively, so it’s in your interest to make sure it isn’t read beyond the abstract.
much bad research comes down to incompetence.
Many of the methodological errors described above can come about by wishful thinking, as much as mendacity. But is it possible to prove foul play?
On an individual level, it is sometimes quite hard to show that a trial has been deliberately rigged to give the right answer for its sponsors. Overall, ...
pharmaceutical company were found to be four times more likely to give results that were favourable to the company than independent studies.
‘transitivity’: if A is better than B, and B is better than C, then C cannot be better than A. To put it bluntly, this review of fifty-six trials exposed a singular absurdity: all of these drugs were better than each other.
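The absurdity amounts to a cycle in the "better than" relation. A toy sketch (the drug names A, B, C and the claims are hypothetical, purely for illustration) that checks whether a set of head-to-head claims admits any transitive ranking at all:

```python
# Each tuple says "the first drug beat the second in a sponsored trial".
# A -> B -> C -> A is the absurdity the review found: taken together,
# every drug is 'better' than every other.
cyclic_claims = [("A", "B"), ("B", "C"), ("C", "A")]

def consistent(claims):
    """True if the 'better than' claims can be ordered transitively,
    i.e. the directed graph of wins contains no cycle."""
    drugs = {d for pair in claims for d in pair}
    beats = {d: {loser for winner, loser in claims if winner == d} for d in drugs}
    placed = set()
    while len(placed) < len(drugs):
        # pick every drug whose beaten rivals are all already placed
        free = [d for d in drugs - placed if not (beats[d] - placed)]
        if not free:
            return False        # cycle: no transitive ranking exists
        placed.update(free)
    return True

print(consistent(cyclic_claims))                    # → False
print(consistent([("A", "B"), ("B", "C")]))         # → True
```

The cyclic set of claims cannot be ordered, which is exactly why a collection of sponsored trials in which "all of these drugs were better than each other" is a logical impossibility rather than a run of bad luck.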
different journal you might have to re-format the references (hours of tedium). If you aim too high and get a few rejections, it could be years until your paper comes out, even if you are being diligent: that’s years of people not knowing about your study.
but you would expect them all to cluster fairly equally around the true answer. You would also expect that the bigger studies, with more participants in them, and with better methods, would be more closely clustered around the correct answer than the smaller studies: the smaller studies, meanwhile, will be all over the shop, unusually positive and negative at random,
The smaller, more rubbish negative trials seem to be missing, because they were ignored—nobody had anything to lose by letting these tiny, unimpressive trials sit in their bottom drawer—and so only the positive ones were published.
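The bottom-drawer effect is easy to model. A toy simulation (the effect size, trial sizes and counts are all invented for illustration): many small trials scatter widely around a modest true benefit, a few big ones cluster tightly, and quietly filing away the negative small trials biases the pooled estimate upward.

```python
import random

random.seed(2)

TRUE_EFFECT = 0.2   # the real (modest) benefit of the drug

def trial_estimate(n):
    """Mean observed effect in a trial of n patients; small trials
    scatter widely around the truth, big ones cluster tightly."""
    return sum(random.gauss(TRUE_EFFECT, 1) for _ in range(n)) / n

big = [trial_estimate(500) for _ in range(5)]
small = [trial_estimate(20) for _ in range(50)]

# Negative small trials stay in the bottom drawer; everything else is published.
published = big + [e for e in small if e > 0]
all_trials = big + small

print(f"pooled estimate, all trials:     {sum(all_trials) / len(all_trials):.2f}")
print(f"pooled estimate, published only: {sum(published) / len(published):.2f}")
```

Because the unpublished trials are precisely the negative ones, the literature that remains systematically overstates the drug's benefit, even though no individual published trial is fraudulent.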
Drug companies can go one better than neglecting negative studies. Sometimes, when they get positive results, instead of just publishing them once, they publish them several times, in different places, in different forms, so that it looks as if there are lots of different positive trials.
Crucially, data which showed the drug in a better light were more likely to be duplicated than the data which showed it to be less impressive, and overall this led to a 23 per cent overestimate of the drug’s efficacy.
anti-arrhythmic drugs were causing comparable numbers of deaths to the total number of Americans who died in the Vietnam war. Information that could have helped to avert this disaster was sitting, tragically, in a bottom drawer, as a researcher later explained:
When we carried out our study in 1980 we thought that the increased death rate … was an effect of chance… The development of [the drug] was abandoned for commercial reasons, and so this study was therefore never published; it is now a good example of ‘publication bias’. The results described here … might have provided an early warning of trouble ahead.
deliberately downplayed or, worse than that, simply not published.
The drug company, Apotex, threatened Olivieri, repeatedly and in writing, that if she published her findings and concerns they would take legal action against her. With great courage—and, shamefully, without the support of her university—Olivieri presented her findings at several scientific meetings and in academic journals. She believed she had a duty to disclose her concerns, regardless of the personal consequences. It should never have been necessary for her to make that decision.
You’re a drug company. Before you even start your study, you publish the ‘protocol’ for it, the methods section of the paper, somewhere public. This means that everyone can see what you’re going to do in your trial, what you’re going to measure, how, in how many people, and so on, before you start.
Before 1935 doctors were basically useless. We had morphine for pain relief—a drug with superficial charm, at least—and we could do operations fairly cleanly, although with huge doses of anaesthetics, because we hadn’t yet sorted out well-targeted muscle-relaxant drugs.
There is a danger with authority-figure coverage, in the absence of real evidence, because it leaves the field wide open for questionable authority figures to waltz
1. We see patterns where there is only random noise.
2. We see causal relationships where there are none.
3. We overvalue confirmatory information for any given hypothesis.
4. We seek out confirmatory information for any given hypothesis.