Tom Stafford's Blog, page 2
February 25, 2018
A graph that is made by perceiving it
The contrast sensitivity function shows how our sensitivity to contrast is affected by spatial frequency. You can test it using gratings of alternating light and dark bands. Ian Goodfellow has this neat observation:
By looking at this image, you can see how sensitive your own eyes are to contrast at different frequencies (taller apparent peaks=more sensitivity at that frequency). It's like a graph that is made by perceiving the graph itself. h/t @catherineols https://t.co/3WosnCfbX1 pic.twitter.com/Io7OxAmrB0
— Ian Goodfellow (@goodfellow_ian) February 24, 2018
It’s a graph that makes itself! The image is the raw data, and by interacting with your visual system, you perceive a discontinuity which illustrates the limits of your perception.
Spatial frequency means how often things change in space. High spatial frequency means lots of fine detail. Spatial frequency is surprisingly important to our visual system – lots of basic features of the visual world, like orientation or motion, are first processed according to the spatial frequency at which the information is available.
Spatial frequency is behind the Einstein-Marilyn illusion, whereby you see Albert Einstein if the image is large or close up, and Marilyn Monroe if the image is small / seen from a distance (try it! You’ll have to walk away from your screen to see it change).
The Einstein Monroe was created by Dr. Aude Oliva at MIT for the March 31st 2007 issue of New Scientist magazine
Depending on distance, different spatial frequencies are easier to see, and if those spatial frequencies encode different information then you can make a hybrid image which switches as you alter your distance from it.
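If you want to try making a hybrid image yourself, the recipe is just that: low-pass filter the image you want visible from afar, high-pass filter the one you want visible up close, and add them. Here's a minimal Python sketch (the file names are hypothetical and the blur widths illustrative – any two aligned portraits will do):

```python
import numpy as np
import imageio.v3 as iio
from scipy.ndimage import gaussian_filter

# Hypothetical file names -- substitute any two aligned portraits
far = iio.imread('marilyn.png').astype(float)    # image to be seen from afar
near = iio.imread('einstein.png').astype(float)  # image to be seen close up

# Collapse to grayscale if the images are RGB
if far.ndim == 3:
    far = far.mean(axis=-1)
if near.ndim == 3:
    near = near.mean(axis=-1)

low = gaussian_filter(far, sigma=8)            # keep only low spatial frequencies
high = near - gaussian_filter(near, sigma=4)   # keep only high spatial frequencies

hybrid = np.clip(low + high, 0, 255).astype(np.uint8)
iio.imwrite('hybrid.png', hybrid)
```

Step back from your screen and the blurred image takes over; lean in and the fine detail wins.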
Spatial frequency is also why, when you’re flying over the ocean, you can see waves which appear not to move. Although your vision is sensitive enough to see the waves, the motion-sensitive part of your visual system isn’t as good at fine spatial frequencies – which creates a natural illusion of static waves.
The contrast sensitivity image at the head of this post varies contrast top to bottom (low to high) and spatial frequency left to right (low to high). The point at which the bars stop looking distinct picks out a ridge which rises (to a maximum at about 10 cycles per degree of visual angle) and then drops off. Where this ridge sits will vary depending on your particular visual system and the distance you view the image from. It is the ultimate individualised data visualisation – it picks out the particular sensitivity of your own visual system, in real time. It’s even interactive, instantly adjusting for momentary changes in parameters like brightness!
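For the curious, here's a minimal numpy/matplotlib sketch of how you could draw a chart like this yourself (a Campbell–Robson-style chart; the sweep rates and ranges below are illustrative choices, not the parameters of the image above):

```python
import numpy as np
import matplotlib.pyplot as plt

w, h = 900, 450
xx, yy = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))

# Sweep spatial frequency left to right. Integrating f(x) = f0*exp(k*x)
# gives the phase, so the local frequency genuinely rises across the image.
f0, k = 2.0, 4.5
phase = 2 * np.pi * (f0 / k) * (np.exp(k * xx) - 1)

# Sweep contrast top to bottom, from roughly 0.3% up to 100%
contrast = 10 ** (-2.5 * (1 - yy))

img = 0.5 + 0.5 * contrast * np.sin(phase)  # luminance in [0, 1]

plt.figure(figsize=(9, 4.5))
plt.imshow(img, cmap='gray', vmin=0, vmax=1)
plt.axis('off')
plt.show()
```

The bars are there all the way up: where they fade into uniform grey is your own contrast sensitivity function, drawn by your eyes.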
More on hybrid images (including some neat examples): Oliva, A., Torralba, A., & Schyns, P. G. (2006). Hybrid images. ACM Transactions on Graphics, 25(3), 527-532.
More on the visual system, including the contrast sensitivity function: Frisby, J. P., & Stone, J. V. (2010). Seeing: The computational approach to biological vision. The MIT Press.
February 7, 2018
How To Become A Centaur
Nicky Case (of Explorable Explanations and Parable of the Polygons internet fame) has a fantastic essay which picks up on the theme of my last Cyberselves post – technology as companion, not competitor.
In How To Become A Centaur, Case gives a blitz history of AI, and of its lesser known cousin IA – Intelligence Augmentation. The insight that digital technology could be a ‘bicycle for the mind’ (Steve Jobs’ phrase) gave us the modern computer, as shown in the 1968 Mother of All Demos which introduced the world to the mouse, hypertext, video conferencing and collaborative working. (1968, people! 1968! As Case notes, 44 years before Google Docs, 35 years before Skype).
We’re living in the world made possible by Engelbart’s demo: digital tools, from mere phones to the remote presence they enable, to the remote action that robots are surely going to make more common. As Case says:
a tool doesn’t “just” make something easier — it allows for new, previously-impossible ways of thinking, of living, of being.
And the vital insight is that the future will rely on identifying the strengths and weakness of natural and artificial cognition, and figuring out how to harness them together. Case again:
When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.
It’s the “+”.
The article is too good to try to summarise. Read the full text here
Cross-posted at the Cyberselves blog.
Previously: Tools, substitutes or companions: three metaphors for thinking about technology, Cyberselves: How Immersive Technologies Will Impact Our Future Selves
January 26, 2018
Debating Sex Differences: Talk transcript
A talk I gave titled “Debating Sex Differences in Cognition: We Can Do Better” now has a home on the web.
The pages align a rough transcript of the talk with the slides, for your browsing pleasure.
Mindhacks.com readers will recognise many of the slides, which started their lives as blog posts. The full series is linked from this first post: Gender brain blogging. The whole thing came about because I was teaching a graduate discussion class on Cordelia Fine’s book, and then Andrew over at psychsciencenotes invited me to give a talk about it.
Here’s a bit from the introduction:
I love Fine’s book. I think of it as a sort of Bad Science but for sex differences research. Part of my argument in this talk is that Fine’s book, and reactions to it, can show us something important about how psychology is conducted and interpreted. The book has flaws, and some people hate it, and those things too are part of the story about the state of psychological research.
More here
January 2, 2018
The backfire effect is elusive
The backfire effect is when correcting misinformation hardens, rather than corrects, someone’s mistaken belief. It’s a relative of so-called ‘attitude polarisation’, whereby people’s views on politically controversial topics can get more, not less, extreme when they are exposed to counter-arguments.
The finding that misperceptions are hard to correct is not new – it fits with research on the tenacity of beliefs and the difficulty of debunking.
The backfire effect appears to give an extra spin on this. If backfire effects hold, then correcting fake news can be worse than useless – the correction could reinforce the misinformation in people’s minds. This is what Brendan Nyhan and Jason Reifler warned about in a 2010 paper ‘When Corrections Fail: The Persistence of Political Misperceptions’.
Now, work by Tom Wood and Ethan Porter suggests that backfire effects may not be common or reliable. Reporting in ‘The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence’, they exposed over 10,000 Mechanical Turk participants, across five experiments and 52 different topics, to misleading statements from American politicians of both main parties. Across all statements, and all experiments, they found that showing people corrections moved their beliefs away from the false information. There was an effect of the match between the ideology of the participant and of the politician, but it wasn’t large:
Among liberals, 85% of issues saw a significant factual response to correction, among moderates, 96% of issues, and among conservatives, 83% of issues. No backfire was observed for any issue, among any ideological cohort
All in all, this suggests, in their words, that ‘The backfire effect is far less prevalent than existing research would indicate’. Far from being counter-productive, corrections work. Part of the power of this new study is that it uses the same sort of materials and participants as the 2010 paper reporting backfire effects – statements about US politics, put to US citizens. Although the numbers make the new study convincing, it doesn’t show the backfire effect will never occur, especially for different attitudes in different contexts or nations.
So, don’t give up on fact checking just yet – people are more reasonable about their beliefs than the backfire effect suggests.
Original paper: Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.
New studies: Wood, T., & Porter, E. (in press). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior.
The news is also good in a related experiment on fake news by the same team: Sex Trafficking, Russian Infiltration, Birth Certificates, and Pedophilia: A Survey Experiment Correcting Fake News. Regardless of ideology or content of fake news, people were responsive to corrections.
Read more about the psychology of responsiveness to argument in my ‘For argument’s sake: evidence that reason can change minds’.


Open Science Essentials: Reproducibility
Open science essentials in 2 minutes, part 3
Let’s define it this way: reproducibility is when your experiment or data analysis can be reliably repeated. It isn’t replicability, which we can define as repeating an experiment and analysis with new data and getting qualitatively similar results. (These aren’t universally accepted definitions, but they are common, and enough to get us started.)
Reproducibility is a bedrock of science – we all know that our methods section should contain enough detail to allow an independent researcher to repeat our experiment. With the increasing use of computational methods in psychology, there’s increasing need – and increasing ability – for us to share more than just a description of our experiment or analysis.
Reproducible methods
Using sites like the Open Science Framework you can share stimuli and other materials. If you use open source experiment software like PsychoPy or Tatool you can easily share the full scripts which run your experiment, so that people on different platforms, and without your software licenses, can still run it.
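To give a flavour, a complete (if trivial) PsychoPy experiment can be a single shareable file. This is a toy sketch, not any real study’s code – the point is that the whole procedure is explicit, down to the stimulus parameters:

```python
# minimal_experiment.py -- a toy PsychoPy experiment, just to show that
# an entire procedure can live in one shareable script (parameters illustrative)
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color='grey', units='pix')

# A sinusoidal grating: spatial frequency in cycles per pixel
grating = visual.GratingStim(win, tex='sin', sf=0.05, size=400, contrast=0.5)

grating.draw()
win.flip()

# Wait for a response and record it
keys = event.waitKeys(keyList=['left', 'right', 'escape'])
print('response:', keys)

win.close()
core.quit()
```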
Reproducible analysis
Equally important is making your analysis reproducible. You’d think that with the same data, another person – or even you in the future – would get the same results. Not so! Most analyses include thousands of small choices. A mis-step in any of these small choices – lost participants, copy/paste errors, mis-labeled cases, unclear exclusion criteria – can derail an analysis, meaning you get different results each time (and different results from what you’ve published).
Fortunately a solution is at hand! You need to use analysis software that allows you to write a script to convert your raw data into your final output. That means no more Excel sheets (no history of what you’ve done = very bad – don’t be these guys) and no more point-and-click SPSS analysis.
Bottom line: You must script your analysis – trust me on this one
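To make that concrete, here’s a toy sketch of a scripted analysis in Python (every file and column name here is hypothetical – the point is that exclusions and tests live in code, not in anyone’s memory or mouse clicks):

```python
# analysis.py -- a minimal sketch of a scripted pipeline (names are illustrative).
# Running this one file takes you from raw data to the reported statistic,
# so every exclusion and transformation is on the record.
import pandas as pd
from scipy import stats

raw = pd.read_csv('data/raw_responses.csv')  # hypothetical raw data file

# Exclusion criteria, stated explicitly rather than applied by hand
clean = raw[(raw['rt'] > 150) & (raw['rt'] < 3000)]
clean = clean[clean['attention_check'] == 'pass']
print(f'Excluded {len(raw) - len(clean)} of {len(raw)} trials')

# The analysis itself: e.g. compare reaction times across two conditions
a = clean.loc[clean['condition'] == 'A', 'rt']
b = clean.loc[clean['condition'] == 'B', 'rt']
t, p = stats.ttest_ind(a, b)
print(f't = {t:.2f}, p = {p:.4f}')
```

Rerun the script and you get the same numbers, every time – and so does anyone else.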
Open data + code
You need to share and document your data and your analysis code. All this is harder work than just writing down the final result of an analysis once you’ve managed to obtain it, but it makes for more robust analysis, and allows someone else to reproduce your analysis easily in the future.
The most likely beneficiary is you – your most likely collaborator in the future is Past You, and Past You doesn’t answer email. Every analysis I’ve ever done I’ve had to repeat, sometimes years later. It saves time in the long run to invest in making a reproducible analysis first time around.
Further Reading
Nick Barnes: Publish your computer code: it is good enough
British Ecological Society: Guide to Reproducible Code
Gael Varoquaux: Computational practices for reproducible science
Advanced
Reproducible Computational Workflows with Continuous Analysis
Part of a series for graduate students in psychology.
Part 1: pre-registration.
Part 2: the Open Science Framework.
Part 3: Reproducibility


December 25, 2017
The Human Advantage
In ‘The Human Advantage: How Our Brains Became Remarkable’, Suzana Herculano-Houzel weaves together two stories: the story of her scientific career, based on her invention of a new technique for counting the number of brain cells in an entire brain, and the story of human brain evolution.
Previously, counts of neurons in the brains of humans and other animals relied on sampling: counting the cells in a slice of tissue and multiplying up to get an estimate. Because of differences in cell types and numbers across brain regions, these estimates are uncertain. Herculano-Houzel’s technique involves liquidizing a whole brain or brain region so that a sample of this homogeneous mass can yield reliable estimates of the total cell count. Herculano-Houzel calls it “brain soup”.
The Human Advantage is the story of her discovery and the collaborations that led her to apply the technique to rodent, primate and human brains, and eventually to everything from giraffes to elephants.
Along the way she made various discoveries that contradict received wisdom in neuroscience:
– most species (including rodents and primates) have about 80% of their neurons in the cerebellum
– humans have about 86 billion neurons (16.3 billion in the cerebral cortex), 14 billion fewer than the conventional estimate of 100 billion
– you can’t use brain size as a proxy for neuron count. Because cell volume changes with body size, some species with bigger brains have fewer neurons, and species with same-sized brains can have vastly different neuron counts.
Example 1
* The capybara (a rodent): cerebral cortex weighing 48.2 g, containing 306 million neurons
* The bonnet monkey (a primate): cerebral cortex weighing 48.3 g, containing 1.7 billion neurons
Example 2
* African elephant: body mass 5,000 kg, brain mass 4,619 g, 5.6 billion cerebral cortex neurons
* Human: body mass 70 kg, brain mass 1,509 g, 16.3 billion cerebral cortex neurons
(Fun fact: 98% of the elephant’s neurons are in the cerebellum – possibly because of the evolution of the trunk.)
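Run the numbers from Example 1 and the dissociation is stark: the two cortices weigh almost exactly the same, but differ more than five-fold in neurons per gram. In a couple of lines, using the figures quoted above:

```python
# Neurons per gram of cerebral cortex, from the figures quoted above
capybara = 306e6 / 48.2  # ~6.3 million neurons per gram
bonnet = 1.7e9 / 48.3    # ~35 million neurons per gram
print(f'{bonnet / capybara:.1f}x')  # ~5.5x -- same-weight cortex, very different counts
```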
A lot of the book is concerned with relative as well as absolute numbers of brain cells. A frequent assumption is that humans must have more cortex relative to the rest of their brain, or more prefrontal cortex relative to the rest of the cortex. This is not true, says Herculano-Houzel’s research. The exception in nature is primates, who show a greater density of neurons per gram of brain mass and more energetically efficient neurons in terms of metabolic requirement per neuron. Humans are no exception to the scaling laws that govern primates, but we are particularly large (a caveat is great apes, who have larger bodies than us, but smaller brains, departing from the body–brain scaling law that governs humans and other primates). Our cognitive exceptionalism is based on the raw number of brain cells in the cortex – that’s the human advantage.
This is a book which blends a deep look into comparative neuroanatomy and the evolutionary story of the brain with the specific research programme of one scientist. It shows how much progress in science depends on technological innovation, hard work, a bit of luck, social connections and thoughtful integration of the ideas of others. A great book – mindhacks.com recommends!


December 23, 2017
Conspiracy theories as maladaptive coping
A review called ‘The Psychology of Conspiracy Theories‘ sets out a theory of why individuals end up believing Elvis is alive, NASA faked the moon landings or 9/11 was an inside job. Karen Douglas and colleagues suggest:
Belief in conspiracy theories appears to be driven by motives that can be characterized as epistemic (understanding one’s environment), existential (being safe and in control of one’s environment), and social (maintaining a positive image of the self and the social group).
In their review they cover evidence showing that factors like uncertainty about the world, lack of control or social exclusion (factors affecting epistemic, existential and social motives respectively) are all associated with increased susceptibility to conspiracy theory beliefs.
But also they show, paradoxically, that exposure to conspiracy theories doesn’t salve these needs. People presented with pro-conspiracy theory information about vaccines or climate change felt a reduced sense of control and increased disillusion with politics and distrust of government. Douglas’ argument is that although individuals might find conspiracy theories attractive because they promise to make sense of the world, they actually increase uncertainty and decrease the chance people will take effective collective action.
My take would be that, viewed like this, conspiracy theories are a form of maladaptive coping. The account makes sense of why we are all vulnerable to conspiracy theories – and we are all vulnerable; many individual conspiracy theories have very widespread subscription – for example half of Americans believe Lee Harvey Oswald did not act alone in the assassination of JFK. Of course polling about individual beliefs must underestimate the proportion of individuals who subscribe to at least one conspiracy theory. The account also makes sense of why some people are more susceptible than others – people who have less education, are more excluded or powerless and have a heightened need to see patterns which aren’t necessarily there.
There are a few areas where this account isn’t fully satisfying.
– it doesn’t really offer a psychologically grounded definition of conspiracy theories. Douglas’s working definition is ‘explanations for important events that involve secret plots by powerful and malevolent groups’, which seems to include some cases of conspiracy beliefs which aren’t ‘conspiracy theories’ (sometimes it is reasonable to believe in secret plots by the powerful; sometimes the powerful are involved in secret plots), and it seems to miss some cases of conspiracy-theory type reasoning (for example paranoid beliefs about other people in your immediate social world).
– one aspect of conspiracy theories is that they are hard to disprove, with, for example, people who present contrary evidence being seen as confirming the existence of the conspiracy. But the common psychological tendency to resist persuasion is well known. Are conspiracy theories especially hard to shift, any more than other beliefs (or the beliefs of non-conspiracy theorists)? Would it be easier to persuade you that the earth is flat than it would be to persuade a flat-earther that the earth is round? If not, then the identifying mark of conspiracy theories may be the factors that lead you to get into them, rather than their dynamics once you’ve got them.
– and how you get into them seems crucially unaddressed by the experimental psychology methods Douglas and colleagues deploy. We have correlational data on the kinds of people who subscribe to conspiracy theories, and experimental data on presenting people with conspiracy theories, but no rich ethnographic account of how individuals find themselves pulled into the world of a conspiracy theory (or how they eventually get out of it).
Further research is, as they say, needed.
Reference: Douglas, K., Sutton, R. M., & Cichocka, A. (2017). The psychology of conspiracy theories. Current Directions in Psychological Science, 26 (6), 538-542.
Karen Douglas’ homepage
Previously on mindhacks.com: Conspiracy theory as character flaw, That’s what they want you to believe. Conspiracy theory page on mindhacks wiki.
I saw Karen Douglas present this work at a talk to Sheffield Skeptics in the Pub. Thanks to them for organising.


December 14, 2017
Cyberselves: How Immersive Technologies Will Impact Our Future Selves
We’re happy to announce the re-launch of our project ‘Cyberselves: How Immersive Technologies Will Impact Our Future Selves’. Straight out of Sheffield Robotics, the project aims to explore the effects of technology like robot avatars, virtual reality, AI servants and other tech which alters your perception or ability to act. We’re interested in work, play and how our sense of ourselves and our bodies is going to change as this technology becomes more and more widespread.
We’re funded by the AHRC to run workshops and bring our roadshow of hands-on cyber-experiences to places across the UK in the coming year. From the website:
Cyberselves will examine the transforming impact of immersive technologies on our societies and cultures. Our project will bring an immersive, entertaining experience to people in unconventional locations, a Cyberselves Roadshow, that will give participants the chance to transport themselves into the body of a humanoid robot, and to experience the world from that mechanical body. Visitors to the Roadshow will also get a chance to have hands-on experiences with other social robots, coding and virtual/augmented reality demonstrations, while chatting to Sheffield Robotics’ knowledgeable researchers.
The project is a follow-up to our earlier AHRC project, ‘Cyberselves in Immersive Technologies‘, which brought together robotics engineers, philosophers, psychologists, scholars of literature, and neuroscientists.
We’re running a workshop on the effects of teleoperation and telepresence, in Oxford in February (Link).
Call for papers: symposium on AI, robots and public engagement at 2018 AISB Convention (April 2018).
Project updates on twitter, via Dreaming Robots (‘Looking at robots in the news, films, literature and the popular imagination’).
Full disclosure: This is a work gig, so I’m effectively being paid to write this


December 6, 2017
Scientific Credibility and The Kardashian Index
The Kardashian index is a semi-humorous metric invented to reveal how much trust you should put in a scientist with a public image.
In ‘The Kardashian index: a measure of discrepant social media profile for scientists‘, the author writes:
I am concerned that phenomena similar to that of Kim Kardashian may also exist in the scientific community. I think it is possible that there are individuals who are famous for being famous
and
a high K-index is a warning to the community that researcher X may have built their public profile on shaky foundations, while a very low K-index suggests that a scientist is being undervalued. Here, I propose that those people whose K-index is greater than 5 can be considered ‘Science Kardashians’
Figure 1 from Hall, N. (2014). The Kardashian index: a measure of discrepant social media profile for scientists. Genome Biology, 15(7), 424.
Your Kardashian index is calculated from your number of Twitter followers and the number of citations your scholarly papers have. You can use the ‘Kardashian Index Calculator‘ to find out your own Kardashian Index, if you have a Twitter account and a Google Scholar profile.
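As reported in Hall (2014), the ‘expected’ follower count is fitted from citations as F(c) = 43.3 × C^0.32, and the K-index is simply actual followers divided by expected followers. A quick sketch (the example numbers are made up):

```python
def kardashian_index(followers: int, citations: int) -> float:
    """K-index from Hall (2014): actual Twitter followers divided by
    followers predicted from citations, F(c) = 43.3 * C**0.32."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

# e.g. a researcher with 5,000 followers and 2,000 citations:
print(round(kardashian_index(5000, 2000), 1))  # ~10.1 -- over the threshold of 5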
The implication of the Kardashian index is that the foundation of someone’s contribution to public debate about science should be their academic publishing. But public debate and scholarly debate are rightfully different things, even if related. To think that only scientists should be listened to in public debate is to think that other forms of skill and expertise aren’t relevant, including the skill of translating between different domains of expertise.
Communicating scientific topics, explaining and interpreting new findings, and understanding the relevance of science to people’s lives and of people’s lives to science are skills in themselves. The Kardashian Index ignores that, and so undervalues it.
Full disclosure: My Kardashian Index is 25.


November 9, 2017
Open Science Essentials: The Open Science Framework
Open science essentials in 2 minutes, part 2
The Open Science Framework (osf.io) is a website designed for the complete life-cycle of your research project – designing projects; collaborating; collecting, storing and sharing data; sharing analysis scripts, stimuli and results; and publishing.
You can read more about the rationale for the site here.
Open Science is fast becoming the new standard for science. As I see it, there are two major drivers of this:
1. Distributing your results via a slim journal article dates from the 17th century. Constraints on the timing, speed and volume of scholarly communication no longer apply. In short, now there is no reason not to share your full materials, data, and analysis scripts.
2. The Replicability crisis means that how people interpret research is changing. Obviously sharing your work doesn’t automatically make it reliable, but since it is a costly signal, it is a good sign that you take the reliability of your work seriously.
You could share aspects of your work in many ways, but the OSF has many benefits:
– the OSF is backed by serious money & institutional support, so the online side of your project will be live many years after you publish the link
– it integrates with various other platforms (GitHub, Dropbox, the PsyArXiv preprint server)
– it is totally free, run for scientists by scientists as a non-profit
All this, and the OSF also makes things like version control and pre-registration easy.
Good science is open science. And the fringe benefit is that making materials open forces you to properly document everything, which makes you a better collaborator with your number one research partner – your future self.
Cross-posted at tomstafford.staff.shef.ac.uk. Part of a series aimed at graduate students in psychology. Part 1: pre-registration.

