Kindle Notes & Highlights
Part of the problem is that papers showing a novel effect are easier to publish than those showing no effect.
The success of selective sharing is striking because it is such a minimal intervention in the scientific process that, arguably, it is not an intervention at all. In fact, in some ways it is even more effective than biased production, for two reasons. One is that it is much cheaper: the industry does not need to fund science, just publicize it. It is also less risky, because propagandists who share selectively do not hide or suppress any results.
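To make the mechanics concrete, here is a minimal simulation sketch. The numbers (success rates of 0.55 versus 0.45, ten-trial studies) and the simple Bayesian update are illustrative assumptions, not the authors' published model. A propagandist who rebroadcasts only the genuine studies in which the better action underperformed can drag public opinion toward the wrong answer, even as scientists who see everything converge on the truth.

```python
import random

# Illustrative parameters, not the authors' published model: action B
# truly succeeds 55% of the time; the rival hypothesis says only 45%.
P_TRUE, P_FALSE = 0.55, 0.45
TRIALS = 10     # trials per study
ROUNDS = 200    # number of studies published over time

def bayes_update(credence, successes):
    """Posterior credence that B is the better action, after one study."""
    failures = TRIALS - successes
    like_true = P_TRUE ** successes * (1 - P_TRUE) ** failures
    like_false = P_FALSE ** successes * (1 - P_FALSE) ** failures
    return credence * like_true / (
        credence * like_true + (1 - credence) * like_false)

random.seed(1)
scientist = 0.5   # sees every study as it appears
public = 0.5      # sees only what the propagandist publicizes

for _ in range(ROUNDS):
    successes = sum(random.random() < P_TRUE for _ in range(TRIALS))
    scientist = bayes_update(scientist, successes)
    # Selective sharing: rebroadcast only the genuine studies in which
    # B happened to underperform.  Nothing is fabricated, and nothing is
    # removed from the scientific record itself.
    if successes < TRIALS / 2:
        public = bayes_update(public, successes)

print(f"scientist's credence in B: {scientist:.3f}")  # near 1
print(f"public's credence in B:    {public:.3f}")     # near 0
```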
the mere rumor of new evidence was enough to derail the hearing.
the mere fact that certain scientists received industry funding can dramatically corrupt the scientific process.
This does two things. It floods the scientific community with results favorable to action A, changing the minds of many other scientists. And it makes it more likely that new labs adopt the methods that tend to favor action A, which serves industry interests. This is because researchers who receive lots of funding and produce lots of papers tend to place more students in positions of influence. Over time, more and more scientists end up favoring action A over action B, even though action B is objectively superior.
Industry puts a finger on the scales by funding researchers it likes; those researchers are thus more likely to gain funding from unbiased sources as well, and the result is yet more science favoring industry interests.
It has since been estimated that their usage may have caused hundreds of thousands of premature deaths.
In these models, one member of the scientific network is a propagandist in disguise whose results are themselves biased.
An embedded propagandist of this sort can permanently prevent scientists from ever reaching a correct consensus.
In the most extreme case, propagandists can bias the available evidence by producing their own research and then suppressing results that are unfavorable to their position. In all of these cases, the propagandist is effective precisely because they can shape the evidence we use to form our beliefs—and thus manipulate our actions.
Propagandists can succeed without manipulating any individual scientist's methods or results, by biasing the way evidence is shared with the public, biasing the distribution of scientists in the network, or biasing the evidence seen by scientists.
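The biased-production strategy is a small variant of the same kind of sketch (again with illustrative numbers rather than the model's actual parameters): here the propagandist runs many studies of its own and releases only the unfavorable ones into the same stream the honest community reads, which can pin the community's credence at the wrong answer indefinitely.

```python
import random

P_TRUE, P_FALSE, TRIALS = 0.55, 0.45, 10   # illustrative, as before

def bayes_update(credence, successes):
    failures = TRIALS - successes
    like_true = P_TRUE ** successes * (1 - P_TRUE) ** failures
    like_false = P_FALSE ** successes * (1 - P_FALSE) ** failures
    return credence * like_true / (
        credence * like_true + (1 - credence) * like_false)

def run_study():
    """One honest ten-trial study of action B (true success rate 0.55)."""
    return sum(random.random() < P_TRUE for _ in range(TRIALS))

random.seed(2)
community = 0.5   # shared credence of the honest community (a simplification)

for _ in range(300):
    honest = [run_study() for _ in range(20)]   # 20 honest labs publish
    # Biased production: the propagandist runs 40 studies of its own but
    # publishes only those in which B happened to underperform.
    biased = [k for k in (run_study() for _ in range(40)) if k < TRIALS / 2]
    for successes in honest + biased:
        community = bayes_update(community, successes)

print(f"community credence in B: {community:.3f}")    # pinned near 0
```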
trust and authority play crucial roles in shaping consumers’ actions and beliefs.
We are forced to rely on experts.
For precisely this reason, the authority of science and the reputations both of individual scientists and of science as an enterprise are prime targets for propagandists.
The propagandist’s message is most effective when it comes from voices we think we can trust.
For one of these problems, they are trying to choose between actions A and B; for the other, unrelated problem, they need to choose between actions Y and Z. Now suppose that for each problem, when deciding whom to trust, they consider their beliefs about both problems. The basic dynamics of the model are the same as before: each scientist uses Jeffrey’s rule, which, remember, specifies how to update beliefs when you are not certain about evidence, but now the uncertainty they assign to a person’s evidence depends on the distance between their beliefs on both topics.
In other words, a scientist might think: “I am not so sure about actions A and B, but I am certain that action Z is better than Y. If another scientist shares my opinion on action Z, I will also trust that person’s evidence on actions A and B.”76
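A minimal sketch of this update rule follows. The linear trust decay and the mixture approximation to Jeffrey conditionalization are our own simplifying assumptions; the point is only that agreement on one topic raises the weight given to evidence on the other.

```python
P_TRUE, P_FALSE, TRIALS = 0.55, 0.45, 10   # illustrative, as in the text

def jeffrey_update(credence, successes, trust):
    """Discounted update: mix the Bayesian posterior with the prior in
    proportion to the trust placed in the evidence.  A simplification
    standing in for full Jeffrey conditionalization."""
    failures = TRIALS - successes
    like_true = P_TRUE ** successes * (1 - P_TRUE) ** failures
    like_false = P_FALSE ** successes * (1 - P_FALSE) ** failures
    posterior = credence * like_true / (
        credence * like_true + (1 - credence) * like_false)
    return trust * posterior + (1 - trust) * credence

def trust_between(me, you):
    """Trust falls off with belief distance summed over BOTH topics, so
    agreement about Y-vs-Z buys credibility about A-vs-B.  The linear
    form is our assumption; the model needs only a decreasing function."""
    distance = abs(me["AB"] - you["AB"]) + abs(me["YZ"] - you["YZ"])
    return max(0.0, 1 - distance)    # distance lies in [0, 2]

me       = {"AB": 0.50, "YZ": 0.90}
ally     = {"AB": 0.50, "YZ": 0.90}   # agrees with me about Y vs. Z
stranger = {"AB": 0.50, "YZ": 0.10}   # disagrees with me about Y vs. Z

successes = 7   # suppose a colleague reports 7 successes in 10 trials of B
print(jeffrey_update(me["AB"], successes, trust_between(me, ally)))      # ~0.69
print(jeffrey_update(me["AB"], successes, trust_between(me, stranger)))  # ~0.54
```

The same evidence moves the agent substantially when it comes from someone who shares their view on the unrelated topic, and barely at all when it comes from someone who does not.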
Conformity and network structure are also critical to understanding the social role of reputation in propaganda.
The spread of variolation had little to do with new knowledge about its success or safety. Instead, it was shunned and then adopted because of social pressures.
Those who hold true beliefs but who, from fear of social censure, do not share evidence supporting those beliefs, stop the spread of good ideas.
Propagandists who target those at the center of social stars can exploit our human tendencies to conform.
Each time they persuade one major influencer to adopt a belief or practice, they can depend on that influencer to persuade others who seek to emulate them.
Once changed, the group is relatively stable.
the propagandist can exert influence by targeting members of a community who, for one reason or another, have special influence.
These disagreements themselves arise because people tend to trust different information sources.
Facebook reported that the 1.59 billion people active on its website are, on average, connected to one another by 3.57 degrees of separation.
Even if fake news is not new, it can now spread as never before. This makes it far more dangerous. But does anyone actually believe the outrageous stories that get posted, shared, and liked on social media?
One-third of the survey respondents recalled seeing at least one of the fake news headlines. Those who remembered the headline judged the fake news to be “very” or “somewhat” accurate 75 percent of the time. By contrast, respondents who recalled seeing the real news headlines judged them to be accurate 83 percent of the time.
23 percent admitted to having shared fake news—of whom 73 percent admitted that they did so unwittingly, discovering only later that the news was fake.
(The others claimed to have known that it was fake at the time but shared it anyway.) Of course, these results do not include participants who unwittingly shared fake news and never learned that it was fake, nor do they include those who would not admit to having been duped.
Novelty makes things salient, and salience sells papers and attracts clicks. It is what we care about. But for some subjects, including science as well as politics and economics, a novelty bias can be deeply problematic.
Focusing on only part of the available evidence is a good way to reach the wrong belief.
intervening to change this distribution will change where consumers of that evidence end up.
since few journalists relish being accused of bias, pressure remains to present both sides of disagreements (or at least to appear to).
In this case, the policy makers do tend to converge to the true belief when the scientists do. But this convergence is substantially slower for policy makers than for the scientists—and indeed, it is substantially slower than if the journalist had merely shared two random results from the scientific community. This is because we generally expect evidence favoring the true belief to appear more often. Sharing equal proportions of results going in both directions puts a strong finger on the scale in the wrong direction.
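A toy simulation makes the slowdown visible (the weekly publication counts and the 99 percent convergence threshold are our illustrative choices, not the model's published settings): a reader served one favorable and one unfavorable result each week still converges on the truth, but far more slowly than a reader served two randomly chosen results.

```python
import random

P_TRUE, P_FALSE, TRIALS = 0.55, 0.45, 10   # illustrative numbers

def bayes_update(credence, successes):
    failures = TRIALS - successes
    like_true = P_TRUE ** successes * (1 - P_TRUE) ** failures
    like_false = P_FALSE ** successes * (1 - P_FALSE) ** failures
    return credence * like_true / (
        credence * like_true + (1 - credence) * like_false)

random.seed(4)
balanced_reader, random_reader = 0.5, 0.5
balanced_done = random_done = None

for week in range(1, 2001):
    # The scientific community publishes 30 honest studies this week.
    studies = [sum(random.random() < P_TRUE for _ in range(TRIALS))
               for _ in range(30)]
    pro = [k for k in studies if k > TRIALS / 2]
    con = [k for k in studies if k < TRIALS / 2]
    # "Balanced" journalist: one result from each side of the dispute.
    if pro and con:
        balanced_reader = bayes_update(balanced_reader, random.choice(pro))
        balanced_reader = bayes_update(balanced_reader, random.choice(con))
    # Control journalist: two randomly chosen results, favorable or not.
    for k in random.sample(studies, 2):
        random_reader = bayes_update(random_reader, k)
    if balanced_done is None and balanced_reader > 0.99:
        balanced_done = week
    if random_done is None and random_reader > 0.99:
        random_done = week
    if balanced_done and random_done:
        break

print("random coverage reached 99% credence in week", random_done)
print("balanced coverage reached 99% credence in week", balanced_done)
```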
Ultimately, the mere existence of contrarians is not a good reason to share their views or give them a platform.
When we are trying to solve difficult problems, there will always be high-quality and convincing evidence that pushes us in both directions.
It is not, and should not be, journalists’ role to referee scientific disagreements; that is what peer review and the scientific process are for, precisely because expert judgment is often essential.
On the other hand, it most certainly is journalists’ job to investigate and question those purported matters of fact on which major domestic and foreign policies are based—including determining whether there is a scientific consensus on an issue relevant to policy-making.
Our point, rather, is that the mere existence of contrarians or (apparent) controversy is not itself a story, nor does it justify equal time for all parties to a disagreement. And the publication of a surprising or contrary-to-expectation research article is not in and of itself newsworthy.
the groups doing this aggregation consciously attempt to mislead journalists about their independence and credentials.
There are two essentially different tasks that reliable news sources perform. One involves investigating allegations, checking facts, and refuting false claims. This is an important activity—but it is also a risky one, because it can, counterintuitively, expand the reach of fake stories. In some cases, as with the Comet Ping Pong conspiracy, it can turn fake news into a real story. The other task involves identifying and reporting real stories of independent interest.
Russian-produced political content reached as many as 126 million US users.
emails have produced discord and mistrust—and in doing so, they have eroded an American political institution. Perhaps this was the point.
the goal was to get close, pose as a peer, and then exert influence.
social influence was used to push people to more extreme versions of the views they already held.
the community page mimics a star network, with the page creator at the center.
Rather than try to influence people already at the center of star networks, the Russian accounts appear to have created star networks of their own: affinity groups structured so that the page creators could communicate with the whole community more easily than members could communicate with one another.
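The structural asymmetry is easy to state in code (a toy illustration with made-up names): in a star network the hub can reach every member directly, while members can reach one another only through the hub.

```python
# Toy star network: the page creator is the hub; members have no
# ties to one another, only to the hub.
members = [f"member_{i}" for i in range(1, 6)]
edges = {("page_creator", m) for m in members}

def neighbors(node):
    """Everyone directly reachable from `node`."""
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

print(sorted(neighbors("page_creator")))  # the hub reaches all five members
print(sorted(neighbors("member_1")))      # a member reaches only the hub
```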
the central assumption underlying our models of polarization was that people are more susceptible to the influence of those they trust—in particular, those who they think have formed reliable beliefs in the past.
So if someone wants to convince you of something you are currently uncertain about, perhaps the best way for that person to do it is to first establish a commonality—a shared interest or belief.

