Kindle Notes & Highlights
Read between February 23 – March 19, 2021
But look what happened after 2000, when pre-registration was brought in. Suddenly there were only a couple of positive results, with the rest of the studies reporting null effects, clustering around zero. The success rate for trials before registration was required was 57 per cent; afterwards, it plummeted to 8 per cent.
before the advent of trial registration, the research appeared hugely successful,
We mustn’t make a correlation-causation error by assuming that the new registration requirements necessarily caused the decrease in positive findings – maybe other things changed in that same year, like a shift in focus to different kinds of treatments. But it’s plausible to think that the public pre-registration of their plans might have led the clinical triallists to be more transparent and honest about what they found.
the ‘Registered Report’ kills publication bias stone dead, by removing the pernicious link between the statistical significance of the results and the decision to publish,
Best of all, it nullifies many of the perverse incentives that lead to bias and fraud in the first place. You know you’re going to get a publication in any case, so there’s no longer so much pressure to beautify your findings.
These large-scale projects can directly address the replicability of their respective fields and, because the results are being shared around a larger community of usually very opinionated scientists, they can also, in theory, act as a check on the biases of any individual researcher.
Open Access movement. Its impact is already obvious: many journals have begun allowing scientists to pay a fee upon publication that lets their paper be downloaded by anyone, in perpetuity, at no cost.
Science Europe, the body that represents the funding agencies of European governments. The idea is that by 2021, all research funded by its members will be published in fully Open Access journals.
The rules stipulate that scientists who get Science Europe grants won’t even be able to submit research to journals that aren’t fully Open Access – and that includes Science, Nature,
gathering and collating their feedback, then editing, checking, printing and distributing the paper journal itself. It’s understandable that publishers would charge for such a service. But with the advent of email and online journals, the whole process is vastly simplified. In the age of online publication, when for-profit publishers charge exorbitant subscription fees for access to their journals, what exactly are we paying for?
National Academy of Sciences subscription cost the university $0.04. The Elsevier per-paper fee was $1.06. More than twenty-six times more expensive. Researchers from across the world are pouring millions into Elsevier’s coffers, and indeed billions into those of for-profit publishers in general. What additional service are they getting for all this money?
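The ‘twenty-six times’ figure follows directly from the two per-paper costs quoted in the passage; a quick arithmetic check (variable names are mine):

```python
# Per-paper subscription costs quoted in the passage, in dollars.
pnas_per_paper = 0.04      # National Academy of Sciences
elsevier_per_paper = 1.06  # Elsevier

ratio = elsevier_per_paper / pnas_per_paper
print(round(ratio, 1))  # 26.5 — "more than twenty-six times more expensive"
```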
The time and expertise of peer reviewers is the crux of the whole scientific system,
publishers are engaging in what economists call rent-seeking:
‘the state funds most research, pays the salaries of most of those checking the quality of research, and then buys most of the published product’.
the irrational system of prestige that’s built up around the journal publishing system represents a market failure,
impeding the necessary competition that could drive down its costs. It’s hard to argue that journal publishers are adding much value to the papers: they often serve merely as a conduit between the authors and the volunteer peer reviewers,
Preprints have already quickened the pace and improved the openness of scientific research – and, one hopes, reduced the need to put ‘failed’ studies in the file drawer,
Journal editors, whose role in this system is more like that of curators than gatekeepers, can peruse all the graded preprints and choose those they would like to include in their journal.
With this model, everything gets published in preprint form, and journals become amplifiers for the best or most relevant research
The journals that would be worth subscribing to would be the ones that provided the best additional value, sifting through the thousands of new preprints
scientists would want to have their work picked up by the most renowned curator-editors and would compete based on where their work is published. But the decision about whether to publish it would already have been made.
Witness, for example, how many universities make their scientists sign an agreement about the number of papers they’ll publish, and the impact factors of the journals they’ll be in, as a step to getting tenure.
scientists who already have tenure, and who already run well-funded labs, continue regularly to engage in the kinds of bad practices described in this book.
Years of implicit and explicit training to chase publications and citations at any cost leave their mark
only academics who survive are the ones who are naturally good at playing the game.
If you’re the type whose insatiable ego drives you to do anything to beat your colleagues’ h-index, the academic system is currently welcoming you with open arms.
The three main players responsible for this are the universities, the journals and the funders:
An analysis in 2016 found that the scores given to potential US National Institutes of Health grants by reviewers had almost no correlation with the quality of the eventual research produced from the grant (measured by the number of citations it received).
‘the value of the science that researchers forgo while preparing [grant] proposals can approach or exceed the value of the science that the funding program supports’.
Henryk Górecki’s Third Symphony, the Symphony of Sorrowful Songs,
a system that encourages outrageous hyping of results, jealous guarding of data, lazy corner-cutting, brazen prestige-hunting and shameful fraud.
Openness. Transparency. Improved statistics. Pre-registration. Automated error-checking. Clever ways to catch fraudsters.
Preprints. Better hiring practices. A new culture of humility. Etc. Etc. What if the replication crisis is a big hoax and we create a better science for nothing?
O, while you live, tell truth, and shame the Devil!
William Shakespeare, Henry IV, Part 1, 3.1.59
is really no such thing as alternative medicine, just medicine that works and medicine that doesn’t’.17 In the same way, there isn’t really such a thing as Open Science: there’s science, and then there’s an inscrutable, closed-off, unverifiable activity academics engage in where your only option is to have blind faith that they’re getting it right.
The editor of the Journal of Environmental Quality was quoted in the Washington Post arguing that a published paper is ‘the end product to your research … It is now finalised. There’s nothing preliminary about it’.20 This is a naïve statement, informed by the kind of idealised, prettified view of science we’ve seen and dismissed throughout this book.
a frank admission of science’s weaknesses is the best way to pre-empt attacks by science’s critics and to be honest more generally
Andrew D. Higginson & Marcus R. Munafò, ‘Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions’, PLOS Biology 14, no. 11 (10 Nov. 2016): p.6; https://doi.org/10.1371/journal.pbio.2000995
As long as you know the effect size and the p-value, you can derive the confidence intervals and vice versa. See D. G. Altman & J. M. Bland, ‘How to Obtain the Confidence Interval from a P Value’, BMJ 343 (16 July 2011): d2090; https://doi.org/10.1136/bmj.d2090 and D. G. Altman & J. M. Bland, ‘How to Obtain the P Value from a Confidence Interval’, BMJ 343 (8 Aug. 2011): d2304; https://doi.org/10.1136/bmj.d2304
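The Altman & Bland conversions cited in this note are simple enough to sketch in a few lines. The helper names below (`ci_from_p`, `p_from_ci`) are my own; the formulas are the approximations given in the two BMJ papers, assuming a normally distributed estimate and a two-sided test:

```python
import math

def ci_from_p(est, p):
    # Altman & Bland (BMJ 2011; 343:d2090): recover the test statistic z
    # from a two-sided p-value, then the standard error, then a 95% CI.
    z = -0.862 + math.sqrt(0.743 - 2.404 * math.log(p))
    se = abs(est) / z
    return est - 1.96 * se, est + 1.96 * se

def p_from_ci(est, lower, upper):
    # Altman & Bland (BMJ 2011; 343:d2304): a 95% CI spans roughly
    # 2 * 1.96 standard errors, which gives z and hence the p-value.
    se = (upper - lower) / (2 * 1.96)
    z = abs(est) / se
    return math.exp(-0.717 * z - 0.416 * z ** 2)
```

A round trip illustrates the point of the note: an estimate of 1.0 with p = 0.05 yields a 95% CI that just about touches zero, and converting that interval back recovers p ≈ 0.05 (not exactly, since both directions are approximations).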
Holly Else, ‘Radical Open-Access Plan Could Spell End to Journal Subscriptions’, Nature 561, no. 7721 (Sept. 2018): pp. 17–18; https://doi.org/10.1038/d41586-018-06178-7. I should note that not everyone is instinctively a fan of Open Access. Probably the best and most detailed discussion of the pros and cons of Open Access is the one by the artificial intelligence researcher Daniel Allington: Daniel Allington, ‘On Open Access, and Why It’s Not the Answer’, Daniel Allington, 15 Oct. 2013; http://www.danielallington.net/2013/10/open-access-why-not-answer/
Ferric C. Fang et al., ‘NIH Peer Review Percentile Scores Are Poorly Predictive of Grant Productivity’, eLife 5 (16 Feb. 2016): e13323; https://doi.org/10.7554/eLife.13323
Kevin Gross & Carl T. Bergstrom, ‘Contest Models Highlight Inherent Inefficiencies of Scientific Funding Competitions’, ed. John P. A. Ioannidis, PLOS Biology 17, no. 1 (2 Jan. 2019): p.1, e3000065; https://doi.org/10.1371/journal.pbio.3000065
Ian Sample, ‘Top Geneticist “Should Resign” Over His Team’s Laboratory Fraud’, Guardian, 1 Feb. 2020; https://www.theguardian.com/education/2020/feb/01/david-latchman-geneticist-should-resign-over-his-team-science-fraud
Kevin Arceneaux et al., ‘We Tried to Publish a Replication of a Science Paper in Science. The Journal Refused’, Slate, 20 June 2019; https://slate.com/technology/2019/06/science-replication-conservatives-liberals-reacting-to-threats.html. Their replication was eventually published as Bert N. Bakker et al., ‘Conservatives and Liberals Have Similar Physiological Responses to Threats’, Nature Human Behaviour (10 Feb. 2020); https://doi.org/10.1038/s41562-020-0823-z
‘Bowel Cleanse, Better DNA: The Nonsense Science of Modi’s India’, South China Morning Post, 13 Jan. 2019; https://www.scmp.com/week-asia/society/article/2181752/bowel-cleanse-better-dna-nonsense-science-modis-india