Kindle Notes & Highlights
Started reading: February 26, 2021
book about false beliefs. How do we form beliefs—especially false ones? How do they persist? Why do they spread? Why are false beliefs so intransigent, even in the face of overwhelming evidence to the contrary? And, perhaps most important, what can we do to change them?
If you believe false things about the world, and you make decisions on the basis of those beliefs, then those decisions are unlikely to yield the outcomes you expect and desire.
The ability to share information and influence one another’s beliefs is part of what makes humans special.
the deliberate propagation of false or misleading information has exploded in the past century, driven both by new technologies for disseminating information—radio, television, the internet—and by the increased sophistication of those who would mislead
All of these sources of deliberately partial, misleading, and inaccurate information—from political propaganda, to politically motivated media, to scientific research shaped by industrial interests—play an important role in the origins and spread of false beliefs.
individually rational agents can form groups that are not rational at all.
Since the early 1990s, our social structures have shifted dramatically away from community-level, face-to-face interactions and toward online interactions.
Arguably, it is the abundance of information, shared in novel social contexts, that underlies the problems we face.
One of the major themes of the science wars was an accusation that humanists writing on science were pseudointellectual poseurs.
We seek to hold beliefs that are “true” in the sense of serving as guides for making successful choices in the future; we generally expect such beliefs to conform with and be supported by the available evidence.
the real threat is from those people who would manipulate scientific knowledge for their own interests or obscure it from the policy makers who might act on it.
Isaac Newton sank into paranoia and insanity at the end of his brilliant life—likely a result of his experiments with mercury. (Posthumous hair samples revealed highly elevated levels of it.)
Once scientists start to share evidence, however, it becomes extremely likely that they will all come to believe the same thing, for better or worse.
Notably, this means that a successful new belief can spread in a way that would not have been very likely without the ability to share evidence.
This happens when a few scientists get a string of misleading results and share them with their colleagues. Scientists who might have been on track to believe the true thing can be derailed by their peers’ misleading evidence. When this happens, the scientists would have been better off not getting input from others.
This trade-off, where connections propagate true beliefs but also open channels for the spread of misleading evidence, means that sometimes it is actually better for a group of scientists to communicate less, especially when they work on a hard problem.
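As a quick illustration of the kind of model being described (a Bala-Goyal-style network of Bayesian agents testing a new treatment), here is a minimal sketch; the parameter values and the complete-network assumption are my own placeholders, not the authors':

```python
import random

EPS = 0.05              # assumed effect size: the new treatment succeeds 55% of the time
P_TRUE = 0.5 + EPS
N_AGENTS = 10           # scientists, all connected to one another
N_TRIALS = 10           # data points each scientist gathers per round
N_ROUNDS = 500

def bayes_update(credence, successes, trials):
    """Bayes' rule for the hypothesis that the new treatment succeeds with
    probability 0.5 + EPS rather than 0.5 - EPS, given one study's outcome."""
    p_hi, p_lo = 0.5 + EPS, 0.5 - EPS
    like_hi = p_hi ** successes * (1 - p_hi) ** (trials - successes)
    like_lo = p_lo ** successes * (1 - p_lo) ** (trials - successes)
    return credence * like_hi / (credence * like_hi + (1 - credence) * like_lo)

def run_community(share_evidence, seed=0):
    rng = random.Random(seed)
    credences = [rng.random() for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        # Only scientists who currently favor the new treatment bother to test it.
        results = [sum(rng.random() < P_TRUE for _ in range(N_TRIALS)) if c > 0.5 else None
                   for c in credences]
        for i in range(N_AGENTS):
            evidence = results if share_evidence else [results[i]]
            for successes in evidence:
                if successes is not None:
                    credences[i] = bayes_update(credences[i], successes, N_TRIALS)
    return sum(c > 0.99 for c in credences)

print("agents convinced of the true theory, sharing evidence:", run_community(True))
print("agents convinced of the true theory, working alone:   ", run_community(False))
```

Runs with sharing typically end with everyone on the same side, but a string of unlucky early studies can land the whole group on the wrong one, which is the trade-off the highlight describes.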
Jeffrey’s rule takes into account an agent’s degree of uncertainty about some piece of evidence when determining what the agent’s new credence should be.
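For my own reference, one standard way to write Jeffrey's rule (the notation and numbers below are mine, not the book's): the new credence in a hypothesis H is a mixture of P(H given E) and P(H given not-E), weighted by how strongly the agent now believes the evidence E itself.

```python
def jeffrey_update(p_h_given_e, p_h_given_not_e, credence_in_e):
    """Jeffrey conditionalization: update on evidence E that is itself uncertain,
    e.g. because it comes from a source the agent only partly trusts."""
    return p_h_given_e * credence_in_e + p_h_given_not_e * (1 - credence_in_e)

print(jeffrey_update(0.8, 0.3, 1.0))   # full trust in E: ordinary conditioning, 0.8
print(jeffrey_update(0.8, 0.3, 0.5))   # half trust in E: a much weaker update, 0.55
```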
Now, instead of steadily trending toward a consensus, either right or wrong, scientists regularly split into polarized groups holding different beliefs, with each side trusting the evidence of only those who already agree with them.
they update on any evidence that comes from a trusted source. Even if people behave very reasonably upon receiving evidence from their peers, they can still end up at odds.
The take-away is that if we want to develop successful scientific theories to help us anticipate the consequences of our choices, mistrusting those with different beliefs is toxic. It can create polarized camps that fail to listen to the real, trustworthy evidence coming from the opposite side. In general, it means that a smaller proportion of the community ultimately arrives at true beliefs.
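A sketch of the mechanism as I understand it (the linear trust function and the numbers are my assumptions, not the book's model verbatim): each agent discounts a colleague's evidence in proportion to how far apart their credences are, so sufficiently distant colleagues are simply tuned out.

```python
def trust_weight(my_credence, their_credence, m=2.0):
    """Assumed trust function: full trust for identical beliefs, falling to zero
    once two agents' credences differ by more than 1/m."""
    return max(0.0, 1.0 - m * abs(my_credence - their_credence))

def discounted_update(my_credence, fully_updated_credence, weight):
    """Jeffrey-style compromise: move to the full Bayesian update only when the
    source is fully trusted (weight 1); ignore the evidence when weight is 0."""
    return weight * fully_updated_credence + (1 - weight) * my_credence

# An agent at credence 0.9 ignores evidence from a colleague at 0.2, even though
# a full Bayesian update on that evidence would have pulled them down to 0.7:
print(discounted_update(0.9, 0.7, trust_weight(0.9, 0.2)))  # prints 0.9
```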
when assessing evidence from others, it is best to judge it on its own merits, rather than on the beliefs of those who present it.
“conformity bias.”
While conformity seems to vary across cultures and over time, it reflects two truths about human psychology: we do not like to disagree with others, and we often trust the judgments of others over our own.
Conformity bias, meanwhile, reflects the fact that completely separately from our rational judgments, we simply do not like to stick out from a pack.
The research on conformity bias suggests that we care about more than just the best action. At least in some settings, it seems we also care about agreeing with other people. In fact, in some cases we are prepared to deny our beliefs, or the evidence of our senses, to better fit in with those around us.
Adding conformity to the model also creates the possibility of stable, persistent disagreement about which theory to adopt.
There was almost no cost to believing the wrong thing. This means that any desire to conform could swamp the costs of holding a false belief.
social psychologists have shown that when monetary incentives are offered to those who get the correct answer in the Asch test, conformity is less prevalent.
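A toy payoff comparison (the payoff numbers are invented for illustration) of how a taste for conformity can swamp the cost of a false belief when, as in the Asch setup, being right is worth little:

```python
def chosen_action(credence_new_is_better, frac_peers_choosing_new,
                  accuracy_payoff=1.0, conformity_payoff=5.0):
    """Pick the action with the higher combined payoff: an expected reward for
    choosing the genuinely better option, plus a reward for matching one's peers."""
    value_new = (accuracy_payoff * credence_new_is_better
                 + conformity_payoff * frac_peers_choosing_new)
    value_old = (accuracy_payoff * (1 - credence_new_is_better)
                 + conformity_payoff * (1 - frac_peers_choosing_new))
    return "new" if value_new > value_old else "old"

# 90% sure the new option is better, but 80% of peers use the old one: conform.
print(chosen_action(0.9, 0.2))                        # -> old
# Raise the stakes for being right (a bigger "monetary incentive") and conformity loses.
print(chosen_action(0.9, 0.2, accuracy_payoff=10.0))  # -> new
```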
espousing one view or the other can have significant social benefits, depending on whom we wish to conform with.
these eating practices can signal membership among new-age, elite, or left-wing social groups, and thus bring social benefits. These same communities sometimes promote wackier ideas—such as the recent fad of “grounding,” based on claims that literally touching the ground provides health benefits as a result of electron transfer between the body and earth.
Thinking of false beliefs as social signals makes the most sense when we have cliquish networks like those in figure
In our polarization models, social influence fails because individuals stop trusting each other. In the conformity models, we see an outcome that, practically speaking, looks the same as polarization because everyone tries to conform with everyone else, but some people just do not interact very often. A glimpse back at figures 4 and 6 will make clear just how different these two outcomes really are.
In the conformity case, disturbing people’s social networks and connecting them with different groups should help rehabilitate those with false beliefs. But when people polarize because of mistrust, such an intervention would generally fail—and it might make polarization worse.
When we use the beliefs of others to ground our judgment of the evidence they share, we can learn to ignore those who might provide us with crucial information. When we try to conform to others in our social networks, we sometimes ignore our best judgment when making decisions, and, in doing so, halt the spread of true belief.
As Oreskes and Conway document in Merchants of Doubt, the key idea behind the revolutionary new strategy—which they call the “Tobacco Strategy”—was that the best way to fight science was with more science.
The goal was rather to create the appearance of uncertainty: to find, fund, and promote research that muddied the waters and made the existing evidence seem less definitive.
“Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the public.”
Like scientists, policy makers have beliefs, and they use Bayes’ rule to update them in light of the evidence they see. But unlike scientists, they do not produce evidence themselves and so must depend on the scientific network to learn about the world.
this agent is not interested in identifying the better of two actions. This agent aims only to persuade the policy makers that action A is preferable—even though, in fact, action B is. Figure 9 shows the model with this agent. Propagandists do not update their beliefs, and they communicate with every policy maker.
We find that this strategy can drastically influence policy makers’ beliefs. Often, in fact, as the community of scientists reaches consensus on the correct action, the policy makers approach certainty that the wrong action is better. Their credence goes in precisely the wrong direction.
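To see the mechanism, here is a stripped-down sketch of the biased-production idea (my own toy version: one policy maker, evidence only from the propagandist, made-up parameters). The propagandist runs honest studies of action B, which really is better, but passes along only the ones in which B happens to underperform:

```python
import random

P_B = 0.6          # assumed truth: action B succeeds 60% of the time (A only 40%)
STUDY_SIZE = 10

def bayes_update(credence_b_better, successes, trials, p_hi=0.6, p_lo=0.4):
    """Bayes' rule for the hypothesis that B succeeds at p_hi rather than p_lo."""
    like_hi = p_hi ** successes * (1 - p_hi) ** (trials - successes)
    like_lo = p_lo ** successes * (1 - p_lo) ** (trials - successes)
    return (credence_b_better * like_hi
            / (credence_b_better * like_hi + (1 - credence_b_better) * like_lo))

rng = random.Random(1)
policy_maker = 0.5                      # starts undecided between A and B
for _ in range(200):
    successes = sum(rng.random() < P_B for _ in range(STUDY_SIZE))  # an honest study
    if successes < STUDY_SIZE / 2:      # ...published only if B looks worse than A
        policy_maker = bayes_update(policy_maker, successes, STUDY_SIZE)

print(f"policy maker's credence that B is better: {policy_maker:.4f}")
```

Every published study favors A, so the policy maker's credence in B only ever falls; that is the "precisely the wrong direction" effect, shown here without any counterweight from the real scientific community.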
Notice that in this model, the propagandist does not fabricate any data. They are performing real science, at least in the sense that they actually perform the experiments they report, and they do so using the same standards and methods as the scientists. They just publish the results selectively.
since the propagandist shares only those results that support the worse theory, their influence will always push policy makers’ beliefs the other way.
Surprisingly, the propagandist will be most effective if they run and publicize the most studies with as few data points as possible.
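A back-of-the-envelope version of this point (numbers assumed for illustration): fix a total data budget and ask how many studies will, by chance alone, favor the worse action as a function of study size.

```python
from math import comb

P_BETTER = 0.6   # assumed: the action the propagandist wants to discredit succeeds 60% of the time
BUDGET = 600     # total data points the propagandist can afford

def prob_study_misleads(n, p=P_BETTER):
    """Probability that an honest study of n trials comes out with more failures
    than successes, i.e. appears to favor the worse action."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n + 1) // 2))

for n in (6, 10, 30, 100):
    studies = BUDGET // n
    print(f"study size {n:3d}: P(one study misleads) = {prob_study_misleads(n):.3f}, "
          f"expected publishable wrong-way studies = {studies * prob_study_misleads(n):.1f}")
```

Smaller studies are individually noisier and there are more of them per dollar, so the same budget buys far more promotable "evidence" when it is spread thin.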
Selective sharing involves searching for and promoting research that is conducted by independent scientists, with no direct intervention by the propagandist, that happens to support the propagandist’s interests.
The propagandist does not do science. They just take advantage of the fact that the data produced by scientists have a statistical distribution, and there will generally be some results suggesting that the wrong action is better.
the propagandist’s pull no longer depends on how much money they can devote to running their own studies, but only on the rate at which spurious results appear in the scientific community.
the effectiveness of selective sharing depends on the details of the problem in question. If scientists are gathering data on something where the evidence is equivocal—say, a disease in which patients’ symptoms vary widely—there will tend to be more results suggesting that the wrong action is better. And the more misleading studies are available, the more material the propagandist has to publicize.
The lower the scientific community’s standards, the easier it is for the propagandist in the tug-of-war for public opinion.
Our models suggest that it is better to give large pots of money to a few groups, which can use the money to run studies with more data, than to give small pots of money to many people who can each gather only a few data points. The latter distribution is much more likely to generate spurious results for the propagandist.
Perhaps the best option is to fund many scientists, but to publish their work only in aggregation, along with an assessment of the total body of evidence.
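One last sketch, reusing the same assumed numbers as above, of why aggregation helps: split one large sample into many small studies and several will point the wrong way, but a single pooled analysis of all the data almost never does.

```python
from math import comb

P_BETTER, TOTAL, SMALL_N = 0.6, 600, 10   # same illustrative assumptions as above

def prob_study_misleads(n, p=P_BETTER):
    """Probability that a study of n trials shows more failures than successes."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n + 1) // 2))

n_small_studies = TOTAL // SMALL_N
print("expected wrong-way results among", n_small_studies, "small studies:",
      round(n_small_studies * prob_study_misleads(SMALL_N), 1))
print("chance a single pooled analysis of all the data points the wrong way:",
      f"{prob_study_misleads(TOTAL):.1e}")
```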

