Some thoughts on persuasion


A friend asked whether there is research on whether some people are more receptive to some communication styles and more resistant to others.

And the short answer is: a lot. There are scholars working on that question in advertising, political communication, health communication, political psychology, social psychology, argumentation, cognitive psychology, logic, and interpersonal communication. Hell, Aristotle makes claims about what styles are more appropriate for various audiences (and rhetors).

These different scholars don’t all come to the same conclusions, and that’s interesting. My crank theory is that it isn’t because one group is more scientific than another, but because it depends upon which situation we’re thinking about: a rhetor (Chester) trying to get someone (Hubert) to believe something new or change his mind on something (“compliance-gaining”); Hubert looking at a lot of data and trying to figure out what to make of it (“reasoning” or “self-persuasion”); Chester trying to strengthen Hubert’s commitment to a belief, group, or policy agenda (“confirmation”), perhaps so much so that Hubert might be willing to engage in actions more aggressive or extreme than before (“mobilizing” or “radicalizing”); or Hubert and Chester together trying to figure out the best course of action (“deliberating”).

Because of how research tends to work, people usually examine or set up (in the case of lab research) scenarios that look at only one of those kinds of persuasion. Of course, in the wild, it’s all of them, sometimes fairly mixed up. So, the research doesn’t always apply neatly to how persuasion actually works (or doesn’t).

A lot of the research doesn’t pose the question the way my friend did—researchers draw conclusions about the ways people are persuaded, rather than beginning with the reasonable hypothesis that individuals don’t all respond the same way, and that people might have styles of reasoning that make them more or less receptive to particular styles of communication. Still and all, some of that work turns up interesting data, such as that people tend to prefer teleological explanations of historical or physical events and phenomena. (We don’t like chance.) (Right now I’m working on the rhetoric of counterfactuals, and there’s some interesting work about that—it also turns up in scholarship on why people keep trying to make evolution into a teleological process.)

It’s common for people to cite studies that conclude that people aren’t persuaded by studies.

Think about that. People who are persuaded that people aren’t persuaded by studies cite studies to others to show they’re right. That’s a performative contradiction.

I think that contradiction happens because we know that people aren’t necessarily persuaded to change their mind about X by having a study (or set of studies) cited at them, but we also know that having studies cited might be a set of datapoints on one side of a scale. Persuasion on big issues happens slowly and cumulatively. People who’ve changed their minds on big issues often describe a long process, with a variety of kinds of data—studies, logic, personal experience, narratives (fiction or film), in-group shifts. Kenneth Burke long ago pointed out that repetition is an important method of persuasion—even repetition of an outright lie or logically indefensible claim (he was talking about Hitler). Repetition as persuasion is a basis of much (most?) advertising.

I think some of the most useful work on persuasion is in the work on cognitive biases. People who are prone to binary thinking are more likely to be persuaded by arguments that can be presented as a binary; people drawn to cognitive closure like arguments that deny uncertainty or complexity. (When frightened, most everyone likes simple binaries—that’s a Trish crank theory.)

In addition to binary thinking, I think a few other really important biases are: confirmation bias, in-group favoritism, and naïve realism.

Confirmation bias is pretty much what it says on the label. People are more likely to believe something that confirms what they already believe. We will hold studies, arguments, claims, and so on to different standards: lower standards of proof/logic for what confirms what we already believe, and higher standards for something we don’t believe. That isn’t necessarily a terrible way to go through life—Kahneman (who did a lot of the great work on cognitive biases) argued that we probably should do that for most of getting through the day. But, on important issues, we need to find ways to minimize that bias.

Confirmation bias also works at a slightly more abstract level—we are more likely to believe a narrative, explanation, judgment, cause-effect argument, and so on if it confirms a pattern we believe is how the world works. If, for instance, we are authoritarians, then we’re more likely to be persuaded by an argument that presumes or advocates authoritarianism.

The just world model is another example of that process. People who believe that everyone gets what they deserve are more likely to believe that a victim of a crime, accident, or disease did something to cause that crime, accident, or disease.

You can see the just world model at work all the time on the reddit sub r/mildlybaddrivers—it’s kind of funny the extent to which some people will strive to place blame on the victim. The more that we’re uncomfortable with the possibility that bad things can happen to people who’ve done nothing wrong—the more that we want to believe in a world we can control—the more we are drawn to a narrative showing that the accidents could have been prevented. We want to believe that those accidents wouldn’t have happened to us.

It’s all about us.

In-group favoritism is well described elsewhere. Basically, we have a tendency to believe that the in-group (the group we’re in) is better than other groups, and therefore entitled to better treatment and more resources, the benefit of the doubt in conflicts, forgiveness (whereas out-group members should be punished for the same behavior), and just generally lower standards. We don’t see them as lower standards—we think “fairness” means better treatment for us and people like us. So, we’re more likely to be persuaded by narratives, arguments, explanations, and so on that favor our in-group. We’re likely to dismiss criticism of the in-group or in-group members as “biased.” We are likely to hold in-group rhetors and leaders to low (or no) standards of proof and reasonableness, especially if we’re in a charismatic leadership relationship with them.

The third, and related, bias that’s important for style of thinking and style of persuasion is “naïve realism.” “Naïve realism” refers to the belief that the world is exactly and completely as it appears to me. If you’re a binary thinker, then naïve realism would seem to have to be right, because you believe the only other possibility is that there is no reality at all. That’s like saying that this animal must be a cat because otherwise there are no categories of animals. We spend most of our day operating on the basis of naïve realism—that the world is as it looks—as we should. But there are times we have to be open to the idea that the world looks different to others because they’re looking at it from a different perspective, that there are parts of the world we can’t see, and that we might even be misled by our own biases. We might be wrong.

You can see how someone who believes that they see the world without biases (not possible, by the way) would only pay attention to rhetors, information, and narratives that confirm what they already believe.

All these things make being open to reasonable persuasion actively scary; we’re “open” to persuasion only if it fits what we already believe. So does authoritarianism, but that’s a different post.
