Kindle Notes & Highlights
Read between November 26 and November 30, 2021
When we open channels for social communication, we immediately face a trade-off. If we want to have as many true beliefs as possible, we should trust everything we hear. This way, every true belief passing through our social network also becomes part of our belief system. And if we want to minimize the number of false beliefs we have, we should not believe anything.
What we see in these models is that even perfectly rational—albeit simple—agents who learn from others in their social network can fail to form true beliefs about the world, even when more than adequate evidence is available. In other words, individually rational agents can form groups that are not rational at all.
As Hume himself put it, “A wise man . . . proportions his belief to the evidence.”
There, too, we can never be certain. But the possibility does not paralyze us, nor should it. We do not wait on absolute certainty—and we cannot, as it is certainly not forthcoming. We have little choice but to act. And when we do, our actions are informed by what we happen to believe—which is why we should endeavor to have beliefs that are as well-supported as possible.
The basic idea is that beliefs come in degrees, which measure, roughly, how likely we think something is to be true. And the evidence we gather can and should influence these degrees of belief. The character of that evidence can make us more or less confident.
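As a concrete illustration of how a degree of belief can respond to evidence, here is a minimal sketch of Bayesian updating. The hypothesis, the likelihood numbers, and the evidence stream are invented for illustration; this is the general idea of credences updated by Bayes' rule, not the authors' specific model.

```python
# A minimal sketch (not the authors' model): a degree of belief as a probability,
# nudged by each piece of evidence via Bayes' rule. All numbers are made up.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

belief = 0.5  # start undecided about H ("the treatment works")
# Each piece of evidence is (P(E | H), P(E | not-H)); a higher ratio means stronger evidence.
evidence_stream = [(0.8, 0.4), (0.7, 0.5), (0.9, 0.3)]
for p_h, p_not_h in evidence_stream:
    belief = bayes_update(belief, p_h, p_not_h)
    print(f"credence in H is now {belief:.3f}")
```

Evidence with a high likelihood ratio moves the credence a lot; weak or ambiguous evidence moves it only a little, which is the "character of that evidence" point above.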
Most work in philosophy of science before Kuhn viewed science as dispassionate and objective inquiry into the world. But if Kuhn was right that paradigms structure scientists’ worldviews and if all of our usual evidence gathering and analysis happens, by necessity, within a paradigm, then this picture was fatally flawed. The “evidence” alone could not lead us to scientific theories.
We seek to hold beliefs that are “true” in the sense of serving as guides for making successful choices in the future; we generally expect such beliefs to conform with and be supported by the available evidence.
This phenomenon, in which scientists improve their beliefs by failing to communicate, is known as the “Zollman effect,” after Kevin Zollman, who discovered it.
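The Zollman effect comes out of network-epistemology models in which Bayesian agents test a new option and share results with their neighbors. The sketch below is a rough, simplified version of that kind of model; the success rates, study size, network size, and number of rounds are my own choices, not Zollman's. With suitable parameters, sparser networks such as the cycle can converge on the better theory more reliably than the fully connected network, because a misleading early run of data cannot sweep the whole community at once; how large the effect is depends on how noisy the evidence is.

```python
# Rough network-epistemology sketch behind the Zollman effect. Bayesian agents
# choose between two hypotheses about a new treatment (success rate 0.55 vs 0.45),
# test it when they favor it, and update on their own and their neighbors' results.
# All parameters are illustrative choices, not Zollman's.
import random
from math import comb

P_GOOD, P_BAD = 0.55, 0.45   # treatment's success rate under each hypothesis
N_TRIALS = 5                 # data points per testing agent per round

def likelihood(k: int, p: float) -> float:
    return comb(N_TRIALS, k) * p**k * (1 - p)**(N_TRIALS - k)

def run(neighbors: dict[int, list[int]], rounds: int = 200) -> bool:
    """Return True if every agent ends up favoring the (truly better) treatment."""
    credence = {a: random.random() for a in neighbors}
    for _ in range(rounds):
        # Only agents who currently favor the treatment test it this round.
        results = {a: sum(random.random() < P_GOOD for _ in range(N_TRIALS))
                   for a, c in credence.items() if c > 0.5}
        for a in credence:
            for b in [a] + neighbors[a]:          # own result plus neighbors' results
                if b in results:
                    num = likelihood(results[b], P_GOOD) * credence[a]
                    credence[a] = num / (num + likelihood(results[b], P_BAD) * (1 - credence[a]))
    return all(c > 0.5 for c in credence.values())

n = 6
complete = {i: [j for j in range(n) if j != i] for i in range(n)}
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
for name, net in [("complete", complete), ("cycle", cycle)]:
    wins = sum(run(net) for _ in range(100))
    print(f"{name:8s}: converged on the better theory in {wins}/100 runs")
```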
Suppose scientists tend to place greater trust in colleagues who have reached the same conclusions they have reached, and less in those who hold radically different beliefs. Again, this is not so unreasonable. We all tend to think we are good at evaluating evidence; it is only reasonable to think that those investigating similar problems, who have reached different conclusions, must not be doing it very well.
This small change to the model radically alters the outcomes. Now, instead of steadily trending toward a consensus, either right or wrong, scientists regularly split into polarized groups holding different beliefs, with each side trusting the evidence of only those who already agree with them.
We also find that the greater the distrust between those with different beliefs, the larger the fraction of the scientific community that eventually ends up with false beliefs. This happens because those who are skeptical of the better theory are precisely those who do not trust those who test it.
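The polarization result described in these passages comes from models in which agents discount evidence produced by people whose beliefs are far from their own. The sketch below is a toy version: the distrust multiplier m, the success rates, and the simple linear discounting rule are illustrative stand-ins, not the authors' exact model. With enough distrust, the population tends to split into a group persuaded by the accumulating evidence and a group that has stopped listening to the people producing it.

```python
# Toy polarization model: each agent weights a colleague's evidence by how close
# the colleague's credence is to their own (weight = max(0, 1 - m * distance)).
# All numbers are illustrative.
import random
from math import comb

P_GOOD, P_BAD, N = 0.55, 0.45, 5   # the better theory really succeeds 55% of the time

def posterior(credence: float, k: int) -> float:
    """Full Bayesian posterior after seeing k successes in N trials."""
    lg = comb(N, k) * P_GOOD**k * (1 - P_GOOD)**(N - k)
    lb = comb(N, k) * P_BAD**k * (1 - P_BAD)**(N - k)
    return lg * credence / (lg * credence + lb * (1 - credence))

def simulate(n_agents: int = 20, m: float = 2.0, rounds: int = 300) -> list[float]:
    credence = [random.random() for _ in range(n_agents)]
    for _ in range(rounds):
        # Only agents who currently favor the better theory test it this round.
        results = [sum(random.random() < P_GOOD for _ in range(N)) if c > 0.5 else None
                   for c in credence]
        for i in range(n_agents):
            for j, k in enumerate(results):
                if k is None:
                    continue
                trust = max(0.0, 1.0 - m * abs(credence[i] - credence[j]))
                # Move part of the way toward the Bayesian posterior, weighted by trust.
                credence[i] += trust * (posterior(credence[i], k) - credence[i])
    return credence

final = simulate()
print("favor the better theory:", sum(c > 0.5 for c in final),
      "| reject it:", sum(c <= 0.5 for c in final))
```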
Notice that our agents do not engage in confirmation bias at all—they update on any evidence that comes from a trusted source. Even if people behave very reasonably upon receiving evidence from their peers, they can still end up at odds.
While conformity seems to vary across cultures and over time, it reflects two truths about human psychology: we do not like to disagree with others, and we often trust the judgments of others over our own.
“information cascade”
Like other models we have looked at, models of information cascades reveal that individuals acting rationally—making the best judgments they can on the basis of the available evidence and their inferences about others’ beliefs based on behavior—can fall into a trap. A group in which almost every member individually would be inclined to make the right judgment might end up agreeing collectively on the wrong one.
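The standard sequential-choice model of information cascades is easy to simulate. In the sketch below, each agent gets a private signal that points the right way 70% of the time, sees all earlier choices, and follows the crowd once it leads by two; the accuracy figure and the tie-breaking rule are my own illustrative choices. Even though each agent's own signal would usually point to the right option, a noticeable fraction of runs lock in on the wrong one.

```python
# Sequential-choice sketch of an information cascade. Each agent's private signal
# points to the true option 70% of the time; agents see all earlier choices and
# follow the crowd once it leads by two. Accuracy and tie-breaking are my choices.
import random

def run_cascade(n_agents: int = 50, signal_accuracy: float = 0.7) -> list[str]:
    true_option = "A"
    choices = []
    for _ in range(n_agents):
        signal = true_option if random.random() < signal_accuracy else "B"
        lead = choices.count("A") - choices.count("B")
        if lead >= 2:
            choice = "A"        # the crowd's lead swamps the private signal
        elif lead <= -2:
            choice = "B"
        else:
            choice = signal     # otherwise, follow one's own evidence
        choices.append(choice)
    return choices

wrong = sum(run_cascade().count("B") > 25 for _ in range(1000))
print(f"runs where most agents settled on the wrong option: {wrong}/1000")
```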
It would be impossible, using any legitimate scientific method, to generate a robust and convincing body of evidence demonstrating that smoking is safe. But that was not the goal. The goal was rather to create the appearance of uncertainty: to find, fund, and promote research that muddied the waters, made the existing evidence seem less definitive, and gave policy makers and tobacco users just enough cover to ignore the scientific consensus.
As a tobacco company executive put it in an unsigned memo fifteen years later: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the public.”
Our only hope is to identify the tools by which our beliefs, opinions, and preferences are shaped, and look for ways to re-exert control—but the success of the Tobacco Strategy shows just how difficult this will be.
The Tobacco Strategy was many-faceted, but there are a handful of specific ways in which the TIRC and similar groups used science to fight science. The first is a tactic that we call “biased production.”
Notice that in this model, the propagandist does not fabricate any data. They are performing real science, at least in the sense that they actually perform the experiments they report, and they do so using the same standards and methods as the scientists. They just publish the results selectively.
Surprisingly, the propagandist will be most effective if they run and publicize as many studies as possible, each with as few data points as possible.
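A small simulation makes the "many small studies" point vivid. In this sketch, the true success rate, the publication threshold, and the data budget are all invented for illustration: the propagandist runs honest studies of a product that in fact works less than half the time, but publishes only those whose observed rate happens to clear 50%. Splitting a fixed data budget into many small studies yields far more publishable flukes than spending it on a few large ones.

```python
# "Biased production" sketch: real studies, selectively published. The propagandist
# splits a fixed data budget into studies of different sizes and publishes only
# those whose observed success rate clears 50%. All numbers are illustrative.
import random

TRUE_RATE = 0.4       # the product in fact succeeds less often than not
THRESHOLD = 0.5       # a study is "favorable" if its observed rate exceeds this
BUDGET = 1000         # total data points the propagandist can afford

def study(n: int) -> float:
    """Run one honest study of n trials; return the observed success rate."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

for n in (5, 25, 100):
    results = [study(n) for _ in range(BUDGET // n)]
    published = [r for r in results if r > THRESHOLD]   # selective publication
    print(f"study size {n:3d}: ran {len(results):3d} studies, "
          f"published {len(published):3d} favorable ones")
```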
The propagandist finds the scientist whose methods are most favorable for the theory they wish to promote and gives that scientist enough money to increase his or her productivity. This does two things. It floods the scientific community with results favorable to action A, changing the minds of many other scientists. And it also makes it more likely that new labs use the methods that are more likely to favor action A, which is better for industry interests.
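Here is a toy rendering of that funding tactic. Everything numeric in it (the true effect size, each lab's method bias, the noise level, the fivefold output of the funded lab) is invented for illustration, and it only shows how multiplying the output of the friendliest lab shifts the pooled published evidence, not the further effect on which methods new labs go on to adopt.

```python
# Toy "fund the friendliest lab" sketch: no data are faked, but the lab whose
# legitimate methods lean most toward the industry-friendly result has its output
# multiplied. True effect, per-lab biases, noise, and the multiplier are invented.
import random

random.seed(1)
TRUE_EFFECT = -0.2                                               # the product is mildly harmful
labs = {f"lab_{i}": random.gauss(0.0, 0.1) for i in range(10)}   # each lab's method bias

def run_study(bias: float) -> float:
    """Observed effect = true effect + method bias + sampling noise."""
    return TRUE_EFFECT + bias + random.gauss(0.0, 0.15)

# Baseline: every lab publishes one study.
baseline = [run_study(b) for b in labs.values()]

# With funding: the most industry-friendly lab runs four extra studies.
friendliest = max(labs, key=labs.get)
funded = baseline + [run_study(labs[friendliest]) for _ in range(4)]

print(f"mean published effect without funding: {sum(baseline)/len(baseline):+.3f}")
print(f"mean published effect with funding:    {sum(funded)/len(funded):+.3f}")
```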
The propaganda strategies we have discussed so far all involve the manipulation of evidence. Either a propagandist biases the total evidence on which we make judgments by amplifying and promoting results that support their agenda; or they do so by funding scientists whose methods have been found to produce industry-friendly results—which ultimately amounts to the same thing.
As the tobacco industry has shown, merely creating the appearance of controversy is often all the propagandist needs to do.
Ultimately, fake news, unsubstantiated allegations, and innuendo can create interest in a story that then justifies investigations and coverage by more reliable sources. Even when these further investigations show the original allegations to be baseless, they spread the reach of the story—and create the sense that there is something to it.
Here is another manifestation of a theme that has come up throughout this book. Individual actions that, taken on their own, are justified, conducive to truth, and even rational, can have troubling consequences in a broader context. Individuals exercising judgment over whom to trust, and updating their beliefs in a way that is responsive to those judgments, ultimately contribute to polarization and the breakdown of fruitful exchanges. Journalists looking for true stories that will have wide interest and readership can ultimately spread misinformation. Stories in which every sentence is true …
The more local our politics is, the less chance for it to be dominated by distorting social effects of the sort that have emerged in recent years. This is because policies with local ramifications give the world more chances to push back.
A second possible intervention concerns our ability to construct social networks that minimize exposure to dissenting opinion and maximize positive feedback for holding certain beliefs, independent of their evidential support.
One general takeaway from this book is that we should stop thinking that the “marketplace of ideas” can effectively sort fact from fiction.
Through discussion, one imagines, the wheat will be separated from the chaff, and the public will eventually adopt the best ideas and beliefs and discard the rest. Unfortunately, this marketplace is a fiction, and a dangerous one. We do not want to limit free speech, but we do want to strongly advocate that those in positions of power or influence see their speech for what it is—an exercise of power, capable of doing real harm.
Vulgar democracy is the majority-rules picture of democracy, where we make decisions about what science to support, what constraints to place on it, and ultimately what policies to adopt in light of that science by putting them to a vote. The problem, he argues, is simple: most of the people voting have no idea what they are talking about. Vulgar democracy is a “tyranny of ignorance”—or, given what we have argued here, a tyranny of propaganda.
Before it can influence policy, hard-won knowledge is filtered through a population that cannot evaluate it—and which is easily manipulated. There is no sense in which the people’s preferences and values are well-represented by this system, and no sense in which it is responsive to facts. It is a caricature of democracy.
The challenge is to find new mechanisms for aggregating values that capture the ideals of democracy, without holding us all hostage to ignorance and manipulation.

