Believing the unlikely
We often want to know how likely something is. I might want to know how likely it is that it will rain tomorrow morning; according to the Met Office website, at the time of writing, this is 90% likely. If I’m playing poker, I might want to know how likely it is that my opponent has been dealt a “high card” hand. With no other information available, we can calculate the probability of this to be about 50%. There seems to be a close link between likelihood and belief; if something is likely, you would be justified in believing it, and if something is unlikely, you would not be justified in believing it. That might seem obvious, and the second part might seem especially obvious – surely you can’t be justified in believing something that’s unlikely to be true? I will suggest here that, sometimes, you can.
Some philosophers and psychologists have argued that it can sometimes be a good thing for people to hold irrational beliefs. That may be right, but it’s not the point I want to make. I think that believing the unlikely can not only be a good thing, but can be fully rational. The reason has to do with testimony, and when we owe it to other people to believe what they say. So much of what we believe about the world is based on the testimony of others. And, while we don’t always accept what others tell us, disbelieving a person’s testimony is not something to be taken lightly. If someone tells me, say, that there’s a coffee shop nearby and, for no reason at all, I refuse to believe it, then this is not just about me – another person is involved, and I’m doing that person a wrong. If I refuse to believe what someone tells me, then I need to have a good reason for doing this.
What does this have to do with believing the unlikely? That’s best illustrated with an example: Suppose I rush back to my car one evening, after my parking has expired, expecting a ticket. When I arrive, however, I discover that some kind stranger has paid for additional parking. When I ask in a nearby shop, the shopkeeper tells me that he saw someone feeding the meter, and the person was wearing glasses. Naturally, I believe what the shopkeeper says, and it’s entirely rational for me to believe this.

Suppose I then remember reading that contact lenses are really common in my town and only 1% of the population are glasses-wearers. This will lower the probability that the kind stranger wore glasses, but I wouldn’t stop believing it. I’d think “That’s good – the shopkeeper’s information could help me find this person”, not “That’s bad – I shouldn’t believe the shopkeeper anymore.” If the shopkeeper’s testimony were false, there would have to be some explanation for this; if I discovered, for instance, that he suffers hallucinations, or has a memory problem, then that could provide an explanation and could make it legitimate to doubt the testimony. But I haven’t discovered anything like that. Furthermore, if the shopkeeper were telling me something far-fetched – that he’d seen aliens, say, or could tell the future – then, again, I might legitimately doubt the testimony. But there’s nothing far-fetched about what the shopkeeper is saying. Think about it this way: Suppose there are 100,000 people in the town, 1,000 of whom wear glasses. One of these people is the kind stranger, and it might just as well be one of the glasses-wearers as one of the others. There’s nothing “hard to believe” about this.
Evidence about the proportion of glasses-wearers doesn’t affect the credibility of the testimony, but it will affect the probabilities – it’s what psychologists and statisticians would call “base rate” information. Let’s add some numbers: Suppose that, if the person wasn’t wearing glasses, there is a 1 in 50 chance that the shopkeeper would have mistakenly said s/he was (because of a hallucination, a false memory, etc.). And suppose that, if the person was wearing glasses, then the shopkeeper would be sure to say that s/he was. We now have all we need to calculate the final probability that the person wore glasses – surprisingly, it comes to only about 33.5%. (For those who are interested, the probability can be calculated using Bayes’ Theorem and the Theorem of Total Probability.) It turns out I am believing the unlikely.
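For anyone who wants to check the arithmetic, here is a minimal sketch of that calculation in Python, using the base rate and reliability figures given above (the variable names are just for illustration):

```python
# Minimal sketch of the calculation described above, using Bayes' Theorem
# and the Theorem of Total Probability with the figures from the example.

p_glasses = 0.01                  # base rate: 1% of the town wears glasses
p_no_glasses = 1 - p_glasses

p_report_if_glasses = 1.0         # shopkeeper is sure to report glasses if worn
p_report_if_no_glasses = 1 / 50   # 1 in 50 chance of a mistaken "glasses" report

# Theorem of Total Probability: overall chance of a "glasses" report
p_report = (p_report_if_glasses * p_glasses
            + p_report_if_no_glasses * p_no_glasses)

# Bayes' Theorem: probability the stranger wore glasses, given the report
p_glasses_given_report = p_report_if_glasses * p_glasses / p_report

print(f"P(wore glasses | report) = {p_glasses_given_report:.3f}")  # ~0.336, i.e. about 33.5%
```

The testimony raises the probability well above the 1% base rate, but still leaves it below one half.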
If I cared only about probabilities then I wouldn’t believe what the shopkeeper says – I might believe the exact opposite. But I don’t care only about probabilities. It’s still true that there would have to be some explanation if the shopkeeper’s testimony were false and, having no inkling of any such explanation, I owe it to him to accept it. The base rate information does not change this. Try to see it from the shopkeeper’s perspective – how would you feel if you had volunteered your testimony, only to have it dismissed because a low proportion of people wear glasses?
It’s well known that people tend to neglect base rates when estimating probabilities – something known as the base rate fallacy. In the example I’ve described, many people would downplay the base rate, and end up overestimating the probability that the person wore glasses. Maybe this example is just an illustration of the fallacy: We overestimate the likelihood that the person wore glasses and, because of the link between likelihood and belief, we (mistakenly) think that one should believe this. The base rate fallacy is a large topic and I won’t try to discuss it here but, suffice it to say, we could turn this kind of story on its head: Maybe one really should believe that the person wore glasses (as there’s credible testimony to that effect), and it’s our (mistaken) assumption of a link between likelihood and belief that partly leads us to overestimate the likelihood of this.
We often want to know how likely things are. We also want to know what we should believe. If I’m right, then answering the first question does not yet give us an answer to the second.
Featured image: Dice by Jacqui Brown. CC BY-SA 2.0 via Flickr.
