This is a question I used to ask my students, and only now realized I should ask FB friends. What’s a major political issue/narrative/belief/commitment on which you changed your mind, and what made you change your mind?
For me, there are so very many, and I’ll mention one. For reasons too complicated to explain, I ended up being the person sent with a dog to a dog training class. I was 12? It was all the (literally Nazi) dog training method of tricking a dog into behaving badly and then punishing it by yanking on the choke collar.
About 25 years later, I got two dogs, and read all sorts of studies and books and took classes. This was a moment in my life when I was seriously considering leaving academia and either becoming a dog trainer or a lawyer.
Being an academic, I researched the issue. Except for Ian Dunbar’s work, there was almost no actual research on which dog training methods work. The dominant advice was still “you must dominate your dog.” I had a Malamute/Lab and a Dane/Shepherd mix, and the dominance method only sort of sometimes worked with the Malamute/Lab (if you squinted), and didn’t work at all with the Dane/Shepherd. It was disastrous with him (Chester, for those of you who’ve known me for a while). Ian Dunbar’s advice worked with both, as did Vicki Hearne’s. Dunbar and Hearne were oriented toward getting your dog (or horse, in the case of Hearne) to do the right thing and then rewarding them.
Even the most “dominate your dog” rhetoric advised that you give your dog a job, and that was great advice–the only useful part of that whole approach.
So, I changed my mind on the whole “you must dominate your dog” approach, but not because I read one study, or had one conversation; it was because of a lot of things. The most important was that I cared enough about my dogs that I was willing to fling my theory of dog obedience out the window if it didn’t seem to be working for the dogs in front of me.
Only after my personal experience made me dubious did I look more carefully at the arguments and evidence for the dominance model. While that argument was familiar to me, and initially seemed normal, the more I looked at it, the clearer it became that its advocates hadn’t actually done the kind of “research” that would have gotten an honorable mention in a 6th grade science fair.
Ian Dunbar’s advice was grounded in far better research than any of the alpha dog bullshit, although it was still just observational.
(In case you’re wondering, the whole alpha male thing is bullshit, although there is a good argument for a more “leadership” model.)
I mentioned I asked students about times that they changed their minds on a big issue (they didn’t have to tell me what the issue was, or narrate the process in any detail), and I generally got a similarly complicated narrative about a long process involving some studies, personal experience, noticing the flaws in in-group arguments. Sometimes it was a very dramatic life event, and sometimes a particularly good book or documentary.
I have said before that I think we’re at a point when we need to persuade, one-to-one, people who aren’t alarmed about what’s happening. I’m not sure how to do that. But I think it might be useful to think about how we were persuaded on big issues. (And, if you know me, you know that dog training is a big issue for me.)
So, I think it might be helpful if we shared conversion narratives. Either yours, or references to famous ones.
If you don’t want your FB id (or name) associated with it, DM or email me, and I’ll post it without identifying information.
My hope is that we can come up with a better model of persuasion than what we get from psych studies or focus groups.
Photo from here: https://www.nytimes.com/2009/07/07/us/07mcnamara.html
Why does having a “reasonable” argument matter?
Some people are claiming that the reason so many people are supporting a political figure they dislike is that our education system is bad. And it is, but not in the ways people think. Our problem has long been that we teach argument, but not argumentation. An argument is a claim with a supporting reason (what Aristotle called an enthymeme); it’s a thing you fling at someone with whom you disagree. It’s very effective for making a person feel confirmed in what they already believe, and therefore also useful for confirming the beliefs of in-group members (or moving them very slightly), but it doesn’t really do much for helping people deliberate together about complicated and controversial problems and policies.
The most popular argument textbook confirms (see what I did there?) the false binary of the rational/irrational split—that one’s position on an issue might be rational (i.e., logical) or emotional (i.e., illogical). That split is itself illogical, and very much an emotional response (the desire to feel that one is rational, and to feel that others are irrational). The false assumption is that a “rational” (aka, “unemotional”) stance on an issue is “unbiased.” I’m not advocating that understanding of reasonable deliberation–I think it’s unmitigated bullshit.
The irony is that this way of describing how people think is wrong, as is shown by so very, very many studies. It is, logically, indefensible (but it feeeeels so good to think of oneself as “rational,” as having a viewpoint that is obviously right and objectively true).
Argumentation is a set of strategies that tries to help people disagree productively with one another (not necessarily nicely, unemotionally, persuasively, or in ways that make everyone comfortable); the strategies are ways of correcting for the biases to which we’re all prone. Argumentation is oriented toward productive and inclusive deliberation, and not just coercion or what one scholar of rhetoric called rhetrickery.
Argumentation requires that participants (usually called interlocutors, a term I like since it sounds as though people are locked together) follow these rules:
1) There is agreement on the “stasis”—what the hell we’re arguing about. (This rule prevents deflection, and various fallacies like motivism, ad hominem, and ad baculum.)
2) All the rules (of logic, civility, citation practices, and so on) apply equally to all parties. (This rule ensures that it is good faith argumentation, rather than just a wanking performance for the in-group or another form of ad baculum.)
3) Interlocutors engage the smartest and best opposition arguments. (This rule prevents another kind of deflection, as well as bad faith posturing in front of the in-group.)
4) Interlocutors cite their sources when asked to provide them, and, as said above, hold their own and opposition sources to the same standards of credibility. (In other words, “this is a good source because it agrees with me, or is in-group,” is not good faith argumentation. It’s performatively admitting that you’re full of shit.)
5) Assertions are not evidence, let alone proof. They’re just assertions. That someone can find a source that asserts that bunnies are not fluffy is not evidence that bunnies are not fluffy; it’s evidence that someone has asserted it. (Were I Queen of the Universe, this is a distinction everyone would have to understand before they finished middle school.)
Notice that following these rules wouldn’t lead to a pleasant, comfortable, conflict-free discussion, and that someone who insisted on these rules might be seen as a person creating conflict.
This next paragraph is very pedantic. I’ve spent over forty years studying how communities have made very bad decisions when they had all the information they needed to make better ones, and this is a list of the approaches to policy disagreements that tend to go badly. The short version is that those communities engaged in various methods of argument, and not argumentation.
There are a lot of ways that people imagine the ideal way for a community to make a decision. One is that everyone would advocate for their preferred course of action without disagreeing with anyone else (expressivist). Another is that people would make the best case possible for their preferred policy, ignoring all norms of ethics, and the one that won the most adherents would be the best (sloppy social Darwinism applied to decision-making). Another is providing all the data necessary for the public to make a reasonable decision (dreamy informationalism). Another is for an elite to decide what is best and to give the public (or their audience) the information that will gain their compliance (rhetorical authoritarianism). And another is to provide “both sides” of an argument to people and see what they decide (expressive deliberation, sometimes called agonism by scholars).
I was once an advocate of agonism, but then I looked at how advocates of slavery talked themselves into a lot of bad decisions, and realized that a public sphere in which opposing arguments are expressed doesn’t do shit in terms of helping communities make good decisions. It can, in fact, foster fanatical commitments, especially if the disagreement about policies is falsely reframed as a conflict among identities (e.g., pro- v. anti-slavery). And, really, every disagreement about an admitted problem that is framed as a conflict between two identities (or a continuum between the two extremes) is gerfucked.
And so I abandoned agonism in favor of argumentation.
It’s important to note that I’m not advocating unemotional public discourse (which is neither possible nor desirable—demonizing the expression of feelings is also a contributor to train wrecks, but that’s a different post). Reasonable and emotional are not in conflict; if anything, they’re necessarily connected.
One of the reasons I abandoned agonism is that I realized that the various advocates of ultimately disastrous policies refused to follow the rules of argumentation. Sometimes they did so calmly, with lots of data and quantification (e.g., McNamara); sometimes they did so dramatically and hyperbolically (e.g., Hitler). Their style, platform, set of policies, personal merits, ethical standards, and all sorts of other things might be very different, but what they shared was that they couldn’t argue for their policies following the rules of argumentation because their policies were bad. Their arguments were paper tigers that looked fierce attacking even frailer paper oppositions, and so often felt compelling, but they were bad arguments in favor of bad policies.
And that’s the important point. If you have good policies, you can engage in good argumentation. If you can’t engage in good argumentation, it might be because you have bad policies. There might be all sorts of other reasons (access to resources, for instance).
It isn’t that every individual has to be able to put forward a reasonable argument that engages the smartest opposition for every decision they (we) make at every moment. It isn’t even that every individual who supports a particular policy has to engage in reasonable argumentation in favor of it. But someone should. If there is a major public policy being advocated and no one can advocate it using reasonable argumentation, then it’s a bad policy.
[1] I’m being generous by saying someone is only fifty years late. In fact, various philosophers have noted many of the biases, such as in-group favoritism and confirmation bias, albeit not by those terms. John Stuart Mill is just one example.
A friend asked whether there is research on whether some people are more receptive to some communication styles and more resistant to others.
And the short answer is: there is a lot. There are scholars working on that question in advertising, political communication, health communication, political psychology, social psychology, argumentation, cognitive psychology, logic, and interpersonal communication. Hell, Aristotle makes claims about which styles are more appropriate for various audiences (and rhetors).
These different scholars don’t all come to the same conclusions, and that’s interesting. My crank theory is that it isn’t because one group is more scientific than another, but because the answer depends on which persuasion scenario we have in mind: a rhetor (Chester) trying to get someone (Hubert) to believe something new or change his mind about something (“compliance-gaining”); Hubert looking at a lot of data and trying to figure out what to make of it (“reasoning” or “self-persuasion”); Chester trying to strengthen Hubert’s commitment to a belief, group, or policy agenda (“confirmation”), perhaps so much so that Hubert becomes willing to engage in actions more aggressive or extreme than before (“mobilizing” or “radicalizing”); or Hubert and Chester together trying to figure out the best course of action (“deliberating”).
Because of how research tends to work, people usually examine or set up (in the case of lab research) scenarios that look at only one of those kinds of persuasion. Of course, in the wild, it’s all of them, sometimes fairly mixed up. So, the research doesn’t always apply neatly to how persuasion actually works (or doesn’t).
A lot of the research doesn’t pose the question the way my friend did—they draw conclusions about ways that people are persuaded, rather than beginning with the reasonable hypothesis that individuals don’t all respond the same way, and that people might have styles of reasoning that would make them more or less receptive to styles of communication. Still and all, some of that work turns up interesting data, such as that people tend to prefer teleological explanations of historical or physical events/phenomena. (We don’t like chance.) (Right now I’m working on the rhetoric of counterfactuals, and there’s some interesting work about that—it also turns up in scholarship on why people keep trying to make evolution into a teleological process.)
It’s common for people to cite studies that conclude that people aren’t persuaded by studies.
Think about that. People who are persuaded that people aren’t persuaded by studies cite studies to others to show they’re right. That’s a performative contradiction.
I think that contradiction happens because we know that people aren’t necessarily persuaded to change their mind about X by having a study (or set of studies) cited at them, but we also know that having studies cited might be a set of datapoints on one side of a scale. Persuasion on big issues happens slowly and cumulatively. People who’ve changed their minds on big issues often describe a long process, with a variety of kinds of data—studies, logic, personal experience, narratives (fiction or film), in-group shifts. Kenneth Burke long ago pointed out that repetition is an important method of persuasion—even repetition of an outright lie or logically indefensible claim (he was talking about Hitler). Repetition as persuasion is a basis of much (most?) advertising.
I think some of the most useful work on persuasion is in the work on cognitive biases. People who are prone to binary thinking are more likely to be persuaded by arguments that can be presented as a binary; people drawn to cognitive closure like arguments that deny uncertainty or complexity. (When frightened, most everyone likes simple binaries—that’s a Trish crank theory.)
In addition to binary thinking, I think a few other really important biases are: confirmation bias, in-group favoritism, and naïve realism.
Confirmation bias is pretty much what it says on the label. People are more likely to believe something that confirms what they already believe. We will hold studies, arguments, claims, and so on to different standards: lower standards of proof/logic for what confirms what we already believe, and higher standards for something we don’t believe. That isn’t necessarily a terrible way to go through life—Kahneman (who did a lot of the great work on cognitive biases) argued that we probably should do that for most of getting through the day. But, on important issues, we need to find ways to minimize that bias.
Confirmation bias also works at a slightly more abstract level—we are more likely to believe a narrative, explanation, judgment, cause-effect argument, and so on if it confirms a pattern we believe is how the world works. If, for instance, we are authoritarians, then we’re more likely to be persuaded by an argument that presumes or advocates authoritarianism.
The just world model is another example of that process. People who believe that everyone gets what they deserve are more likely to believe that a victim of a crime, accident, or disease did something to cause that crime, accident, or disease.
You can see how the just world model leads people on the reddit sub r/mildlybaddrivers to place blame all the time—it’s kind of funny the extent to which some people will strive to place blame on the victim. The more that we’re uncomfortable with the possibility that bad things can happen to people who’ve done nothing wrong—the more that we want to believe in a world we can control—the more we are drawn to a narrative that shows the accidents could have been prevented. We want to believe that accidents wouldn’t have happened to us.
It’s all about us.
In-group favoritism is well described here. Basically, we have a tendency to believe that the in-group (the group we’re in) is better than other groups, and therefore entitled to better treatment and more resources, the benefit of the doubt in conflicts, forgiveness (whereas out-group members should be punished for the same behavior), and just generally lower standards. We don’t see them as lower standards—we think “fairness” means better treatment for us and people like us. So, we’re more likely to be persuaded by narratives, arguments, explanations, and so on that favor our in-group. We’re likely to dismiss criticism of the in-group or in-group members as “biased.” We are likely to hold in-group rhetors and leaders to low (or no) standards of proof and reasonableness, especially if we’re in a charismatic leadership relationship with them.
The third, and related, bias that’s important for style of thinking and style of persuasion is “naïve realism.” “Naïve realism” refers to the belief that the world is exactly and completely as it appears to me. If you’re a binary thinker, naïve realism would seem to be right, because you believe the only other possibility is that there is no reality at all. That’s like saying that this animal must be a cat because otherwise there are no categories of animals. We spend most of our day operating on the basis of naïve realism—that the world is as it looks—as we should. But there are times we have to be open to the idea that the world looks different to others because they’re looking at it from a different perspective, that there are parts of the world we can’t see, and that we might even be misled by our own biases. We might be wrong.
You can see how someone who believed that they see the world without biases (not possible, by the way) would only pay attention to rhetors, information, narratives that confirm what they already believe.
All these things make being open to reasonable persuasion actively scary; we’re “open” to persuasion only if it fits what we already believe. So does authoritarianism, but that’s a different post.