Kindle Notes & Highlights
Read between March 12 - June 25, 2022
Let’s work to resist our neophiliac, magpie-like focus on shiny research findings, and instead learn to value results that are solid, even if they’re less immediately thrilling. In other words, let’s Make Science Boring Again.
The trick will be to balance our new, step-by-step attitude towards knowledge with an appreciation that sometimes moonshot research by eccentric, Boulez-style characters really can have enormous payoffs.
If you’re convinced that science is an implacable wall of truth that you have no choice but to believe in, what do you do when it becomes clear that something has gone wrong? After all, if we’ve learned one thing from this book, it’s that science will, quite often, go wrong.
Every time we allow a flawed or obviously biased study to be published; every time we write another boy-who-cried-wolf press release that can’t be backed up by the data; every time a scientist writes a popular book full of feel-good-but-flimsy advice, we hand science’s critics another round of ammunition. Fix the science, I’d suggest, and the trust will follow.
The world is rightly proud of where science has brought us. To retain that pride, we owe it something far better than the product of our flawed human temperaments. We owe it the truth.
Annie Franco et al., ‘Publication Bias in the Social Sciences: Unlocking the File Drawer’, Science 345, no. 6203 (19 Sept. 2014): pp. 1502–5; https://doi.org/10.1126/science.1255484.
Leslie K. John et al., ‘Measuring the Prevalence of Questionable Research Practices with Incentives for Truth Telling’, Psychological Science 23, no. 5 (May 2012): pp. 524–32; https://doi.org/10.1177/0956797611430953.
R. Silberzahn et al., ‘Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results’, Advances in Methods and Practices in Psychological Science 1, no. 3 (Sept. 2018): pp. 337–56; https://doi.org/10.1177/2515245917747646.
Tal Yarkoni & Jacob Westfall, ‘Choosing Prediction Over Explanation in Psychology: Lessons from Machine Learning’, Perspectives on Psychological Science 12, no. 6 (Nov. 2017): pp. 1100–1122, p. 1104; https://doi.org/10.1177/1745691617693393.
Goldacre’s team also tried to get letters published in the journals, pointing out that the trials hadn’t accurately reported their findings. Most editors weren’t interested. Ben Goldacre, ‘Make Journals Report Clinical Trials Properly’, Nature 530, no. 7588 (Feb. 2016): p. 7; https://doi.org/10.1038/530007a.
Michèle B. Nuijten, ‘statcheck – a Spellchecker for Statistics’, LSE Impact of Social Sciences, 28 Feb. 2018; https://blogs.lse.ac.uk/impactofsocialsciences/2018/02/28/statcheck-a-spellchecker-for-statistics/.
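statcheck’s core trick is simple: extract reported test statistics (such as a t-value and its degrees of freedom) from a paper’s text, recompute the p-value, and flag results where the reported p contradicts the recomputed one. The real tool is an R package; the sketch below reimplements only the basic idea in Python, and the regex, tolerance, and `check_t_result` helper are my own illustrative choices, not statcheck’s API.

```python
# Illustrative reimplementation of statcheck's basic idea (the real
# statcheck is an R package; all names here are my own).
import re
from scipy import stats

def check_t_result(text, tol=0.005):
    """Parse a result like 't(28) = 2.20, p = .04' and recompute p."""
    m = re.search(r"t\((\d+)\)\s*=\s*([\d.]+),\s*p\s*=\s*([\d.]+)", text)
    if m is None:
        return None
    df, t_val, p_reported = int(m.group(1)), float(m.group(2)), float(m.group(3))
    p_computed = 2 * stats.t.sf(abs(t_val), df)  # two-tailed p from t and df
    # reported p-values are usually rounded to two decimals, hence the tolerance
    return {"p_reported": p_reported,
            "p_computed": round(p_computed, 4),
            "consistent": abs(p_reported - p_computed) <= tol}

print(check_t_result("t(28) = 2.20, p = .04"))
# the recomputed p is roughly .036, so a reported p = .04 passes;
# a reported p = .01 for the same statistic would be flagged
```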
Nicholas J. L. Brown & James A. J. Heathers, ‘The GRIM Test: A Simple Technique Detects Numerous Anomalies in the Reporting of Results in Psychology’, Social Psychological and Personality Science 8, no. 4 (May 2017): pp. 363–69; https://doi.org/10.1177/1948550616673876.
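The GRIM test itself is little more than arithmetic: if N people answer an integer-valued question, the sum of their answers must be an integer, so the mean can only take values of the form k/N. A reported mean that no integer sum could produce is impossible. A minimal sketch, assuming integer item responses and means reported to a fixed number of decimals:

```python
# Minimal GRIM check, assuming integer responses (e.g. a Likert scale)
# and a mean reported to `decimals` places. Note: Python's round() uses
# banker's rounding, which can differ from a paper's rounding at exact halves.
def grim_consistent(reported_mean, n, decimals=2):
    nearest_total = round(reported_mean * n)  # only integer sums are achievable
    # the reported mean is itself rounded, so check neighbouring totals too
    for total in (nearest_total - 1, nearest_total, nearest_total + 1):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

print(grim_consistent(5.19, 28))  # False: no sum of 28 integers averages to 5.19
print(grim_consistent(5.18, 28))  # True: 145/28 = 5.1786, which rounds to 5.18
```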
J. B. Carlisle, ‘Data Fabrication and Other Reasons for Non-Random Sampling in 5087 Randomised, Controlled Trials in Anaesthetic and General Medical Journals’, Anaesthesia 72, no. 8 (Aug. 2017): pp. 944–52; https://doi.org/10.1111/anae.13938.
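Carlisle’s checks exploit a property of genuine randomisation: when trial arms are formed at random, p-values from comparisons of their baseline characteristics (age, weight, and so on) should be roughly uniformly distributed across trials. Fabricated or non-random baselines show up as a skewed distribution, often piled up near 1, meaning the groups match ‘too well’. His actual procedure is considerably more sophisticated; this is only a simplified simulation of the intuition, with made-up numbers:

```python
# Simplified simulation of the intuition behind Carlisle's checks: under
# true randomisation, baseline-comparison p-values are ~Uniform(0, 1);
# fabricated baselines that match too closely pile up near p = 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def baseline_pvalues(n_trials, fabricated=False):
    pvals = []
    for _ in range(n_trials):
        a = rng.normal(50, 10, size=100)          # baseline variable, arm A
        b = rng.normal(50, 10, size=100)          # arm B, same population
        if fabricated:
            b = a + rng.normal(0, 0.5, size=100)  # arm B copied from A, jittered
        pvals.append(stats.ttest_ind(a, b).pvalue)
    return np.array(pvals)

for label, fab in [("genuine", False), ("fabricated", True)]:
    p = baseline_pvalues(200, fabricated=fab)
    ks = stats.kstest(p, "uniform")  # test departure from uniformity
    print(f"{label}: KS p = {ks.pvalue:.3g}")
# genuine baselines pass the uniformity test; fabricated ones fail it badly
```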
Serge P. J. M. Horbach & Willem Halffman, ‘The Ghosts of HeLa: How Cell Line Misidentification Contaminates the Scientific Literature’, PLOS ONE 12, no. 10 (12 Oct. 2017): e0186281; https://doi.org/10.1371/journal.pone.0186281.
Joseph P. Simmons et al., ‘Life after P-Hacking’, Meeting of the Society for Personality and Social Psychology, New Orleans, LA, 17–19 Jan. 2013; SSRN; https://doi.org/10.2139/ssrn.2205186.
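The ‘p-hacking’ of the title refers to exploiting flexibility in analysis (extra outcome measures, optional stopping, ad hoc exclusions) until a significant result appears. Even a mild version badly inflates the false-positive rate, as a quick simulation shows; the specific numbers below are mine, chosen only for illustration:

```python
# Quick simulation of one form of p-hacking: run up to five tests of a
# null effect and report the first significant one. The nominal 5%
# false-positive rate roughly quadruples (1 - 0.95**5 is about 0.23).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_attempts, n = 10_000, 5, 30
false_positives = 0

for _ in range(n_sims):
    for _ in range(n_attempts):       # keep trying until one test "works"
        a = rng.normal(0, 1, size=n)  # control group: no true effect
        b = rng.normal(0, 1, size=n)  # "treatment" group: no true effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1
            break

print(false_positives / n_sims)  # roughly 0.23, not the nominal 0.05
```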
P. S. Sumner et al., ‘The Association between Exaggeration in Health-Related Science News and Academic Press Releases: Retrospective Observational Study’, BMJ 349 (9 Dec. 2014): g7015; https://doi.org/10.1136/bmj.g7015.
Rachel C. Adams et al., ‘Claims of Causality in Health News: A Randomised Trial’, BMC Medicine 17, no. 1 (Dec. 2019): 91; https://doi.org/10.1186/s12916-019-1324-7.
Isabelle Boutron et al., ‘Three Randomized Controlled Trials Evaluating the Impact of “Spin” in Health News Stories Reporting Studies of Pharmacologic Treatments on Patients’/Caregivers’ Interpretation of Treatment Benefit’, BMC Medicine 17, no. 1 (Dec. 2019): 105; https://doi.org/10.1186/s12916-019-1330-9.
Estelle Dumas-Mallet et al., ‘Poor Replication Validity of Biomedical Association Studies Reported by Newspapers’, PLOS ONE 12, no. 2 (21 Feb. 2017): e0172650; https://doi.org/10.1371/journal.pone.0172650.
New England Journal of Medicine: https://www.nejm.org/about-nejm.
Isabelle Boutron, ‘Reporting and Interpretation of Randomized Controlled Trials with Statistically Nonsignificant Results for Primary Outcomes’, JAMA 303, no. 20 (26 May 2010): pp. 2058–64; https://doi.org/10.1001/jama.2010.651.
Isabelle Boutron & Philippe Ravaud, ‘Misrepresentation and Distortion of Research in Biomedical Literature’, Proceedings of the National Academy of Sciences 115, no. 11 (13 Mar. 2018): pp. 2613–19; https://doi.org/10.1073/pnas.1710755115.
Y. A. de Vries et al., ‘The Cumulative Effect of Reporting and Citation Biases on the Apparent Efficacy of Treatments: The Case of Depression’, Psychological Medicine 48, no. 15 (Nov. 2018): pp. 2453–5.
One such program is RMarkdown: https://rmarkdown.rstudio.com/
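RMarkdown’s contribution to reproducibility is that prose and analysis live in one file: every number in the text is recomputed from the raw data each time the document is rendered, so the write-up cannot silently drift out of sync with the analysis. A minimal sketch of an .Rmd file follows; the data file and variable names are hypothetical.

````markdown
---
title: "A minimal reproducible report"
output: html_document
---

```{r load}
# hypothetical data file with columns `ms` and `condition`
rt_data <- read.csv("reaction_times.csv")
```

The mean reaction time was `r round(mean(rt_data$ms), 1)` ms; the value
is computed from the data at render time rather than typed in by hand.
````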
Amy Orben & Andrew K. Przybylski, ‘The Association between Adolescent Well-Being and Digital Technology Use’, Nature Human Behaviour 3, no. 2 (Feb. 2019): pp. 173–82; https://doi.org/10.1038/s41562-018-0506-1. Full disclosure: Orben and Przybylski are friends and colleagues of mine.
Florian Markowetz, ‘Five Selfish Reasons to Work Reproducibly’, Genome Biology 16 (Dec. 2015): 274; https://doi.org/10.1186/s13059-015-0850-7.
See Marcus R. Munafò & George Davey Smith, ‘Robust Research Needs Many Lines of Evidence’, Nature 553, no. 7689 (25 Jan. 2018): pp. 399–401; https://doi.org/10.1038/d41586-018-01023-3; and Debbie A. Lawlor et al., ‘Triangulation in Aetiological Epidemiology’, International Journal of Epidemiology 45, no. 6 (20 Jan. 2017): pp. 1866–86; https://doi.org/10.1093/ije/dyw314.