When and how have you been persuaded on a big issue?

Great Dane mix (Chester) with the red ball

This is a question I used to ask my students, and only now realized I should ask FB friends. What’s a major political issue/narrative/belief/commitment on which you changed your mind, and what made you change your mind?

For me, there are so very many, and I’ll mention one. For reasons too complicated to explain, I ended up being the person sent with a dog to a dog training class. I was 12? It was all the (literally Nazi) dog training method of tricking a dog into behaving badly and then punishing it by yanking on the choke collar.

About 25 years later, I got two dogs, and read all sorts of studies and books and took classes. This was a moment in my life when I was seriously considering leaving academia and either becoming a dog trainer or a lawyer.

Being an academic, I researched the issue. Except for Ian Dunbar, there was almost no actual research on the issue of what dog training works. The dominant advice was still “you must dominate your dog.” I had a Malamute/Lab and a Dane/Shepherd mix and the dominance method only sort of sometimes worked with the Malamute/Lab (if you squinted), and didn’t work at all with the Dane/Shepherd. It was disastrous with him (Chester, for those of you who’ve known me for a while). Ian Dunbar’s advice worked with both, as did Vicki Hearne’s advice. Dunbar and Hearne were oriented toward getting your dog (or horse, in the case of Hearne) to do the right thing and then rewarding them.  

Even the most “dominate your dog” rhetoric advised that you give your dog a job, and that was great advice–the only useful part of that whole approach.

So, I changed my mind on the whole “you must dominate your dog” approach, but not because I read one study, or had one conversation; it was because of a lot of things. The most important was that I cared enough about my dogs that I was willing to fling my theory of dog obedience out the window if it didn’t seem to be working for the dogs in front of me.

Only after my personal experience made me dubious did I look more carefully at the arguments and evidence for the dominance model. While that argument was familiar to me, and initially seemed normal, the more I looked at it, the more it was clear that they hadn’t actually done the kind of “research” that would have gotten an honorable mention in a 6th grade science fair.

Ian Dunbar’s advice was grounded in far better research than any of the alpha dog bullshit, although it was still just observational.

(In case you’re wondering, the whole alpha male thing is bullshit, although there is a good argument for a more “leadership” model.)

I mentioned I asked students about times that they changed their minds on a big issue (they didn’t have to tell me what the issue was, or narrate the process in any detail), and I generally got a similarly complicated narrative about a long process involving some studies, personal experience, noticing the flaws in in-group arguments. Sometimes it was a very dramatic life event, and sometimes a particularly good book or documentary.

I have said before, I think that we’re at a point when we need to persuade people who aren’t alarmed about what’s happening in a one-to-one way. I’m not sure how to do that. But I think it might be useful to think about how we were persuaded on big issues. (And, if you know me, you know that dog training is a big issue for me).

So, I think it might be helpful if we shared conversion narratives. Either yours, or references to famous ones.

If you don’t want your FB id (or name) associated with it, DM or email me, and I’ll post it without identifying information.

My hope is that we can come up with a better model of persuasion than what we get from psych studies or focus groups.


Some thoughts on persuasion

train wreck

A friend asked whether there is research on whether some people are more receptive to some communication styles and more resistant to others.

And the short answer is: a lot. There are scholars working on that question in advertising, political communication, health communication, political psychology, social psychology, argumentation, cognitive psychology, logic, and interpersonal communication. Hell, Aristotle makes claims about what styles are more appropriate for various audiences (and rhetors).

These different scholars don’t all come to the same conclusions, and that’s interesting. My crank theory is that it isn’t because one group is more scientific than another, but because the conclusions depend on which kind of persuasion we have in mind: a rhetor (Chester) trying to get someone (Hubert) to believe something new or change his mind on something (“compliance-gaining”); Hubert looking at a lot of data and trying to figure out what to make of it (“reasoning” or “self-persuasion”); Chester trying to strengthen Hubert’s commitment to a belief, group, or policy agenda (“confirmation”), perhaps so much so that Hubert becomes willing to engage in actions more aggressive or extreme than before (“mobilizing” or “radicalizing”); or Hubert and Chester together trying to figure out the best course of action (“deliberating”).

Because of how research tends to work, people usually examine or set up (in the case of lab research) scenarios that look at only one of those kinds of persuasion. Of course, in the wild, it’s all of them, sometimes fairly mixed up. So, the research doesn’t always apply neatly to how persuasion actually works (or doesn’t).

A lot of the research doesn’t pose the question the way my friend did—they draw conclusions about ways that people are persuaded, rather than beginning with the reasonable hypothesis that individuals don’t all respond the same way, and that people might have styles of reasoning that would make them more or less receptive to styles of communication. Still and all, some of that work turns up interesting data, such as that people tend to prefer teleological explanations of historical or physical events/phenomena. (We don’t like chance.) (Right now I’m working on the rhetoric of counterfactuals, and there’s some interesting work about that—it also turns up in scholarship on why people keep trying to make evolution into a teleological process.)

It’s common for people to cite studies that conclude that people aren’t persuaded by studies.

Think about that. People who are persuaded that people aren’t persuaded by studies cite studies to others to show they’re right. That’s a performative contradiction.

I think that contradiction happens because we know that people aren’t necessarily persuaded to change their mind about X by having a study (or set of studies) cited at them, but we also know that having studies cited might be a set of datapoints on one side of a scale. Persuasion on big issues happens slowly and cumulatively. People who’ve changed their minds on big issues often describe a long process, with a variety of kinds of data—studies, logic, personal experience, narratives (fiction or film), in-group shifts. Kenneth Burke long ago pointed out that repetition is an important method of persuasion—even repetition of an outright lie or logically indefensible claim (he was talking about Hitler). Repetition as persuasion is a basis of much (most?) advertising.

I think some of the most useful work on persuasion is in the work on cognitive biases. People who are prone to binary thinking are more likely to be persuaded by arguments that can be presented as a binary; people drawn to cognitive closure like arguments that deny uncertainty or complexity. (When frightened, most everyone likes simple binaries—that’s a Trish crank theory.)

In addition to binary thinking, I think a few other really important biases are: confirmation bias, in-group favoritism, and naïve realism.

Confirmation bias is pretty much what it says on the label. People are more likely to believe something that confirms what they already believe. We will hold studies, arguments, claims, and so on to different standards: lower standards of proof/logic for what confirms what we already believe, and higher standards for something we don’t believe. That isn’t necessarily a terrible way to go through life—Kahneman (who did a lot of the great work on cognitive biases) argued that we probably should do that for most of getting through the day. But, on important issues, we need to find ways to minimize that bias.

Confirmation bias also works at a slightly more abstract level—we are more likely to believe a narrative, explanation, judgment, cause-effect argument, and so on if it confirms a pattern we believe is how the world works. If, for instance, we are authoritarians, then we’re more likely to be persuaded by an argument that presumes or advocates authoritarianism.

The just world model is another example of that process. People who believe that everyone gets what they deserve are more likely to believe that a victim of a crime, accident, or disease did something to cause that crime, accident, or disease.

You can see how the just world model causes people to place blame on the reddit sub r/mildlybaddrivers all the time—it’s kind of funny the extent to which some people will strive to place blame on the victim. The more that we’re uncomfortable with the possibility that bad things can happen to people who’ve done nothing wrong—the more that we want to believe in a world we can control—the more we are drawn to a narrative that shows the accidents could have been prevented. We want to believe that accidents wouldn’t have happened to us.

It’s all about us.

In-group favoritism is well described here. Basically, we have a tendency to believe that the in-group (the group we’re in) is better than other groups, and therefore entitled to better treatment and more resources, the benefit of the doubt in conflicts, forgiveness (whereas out-group members should be punished for the same behavior), and just generally lower standards. We don’t see them as lower standards—we think “fairness” means better treatment for us and people like us. So, we’re more likely to be persuaded by narratives, arguments, explanations, and so on that favor our in-group. We’re likely to dismiss criticism of the in-group or in-group members as “biased.” We are likely to hold in-group rhetors and leaders to low (or no) standards of proof and reasonableness, especially if we’re in a charismatic leadership relationship with them.

The third, and related, bias that’s important for style of thinking and style of persuasion is “naïve realism.” “Naïve realism” refers to the belief that the world is exactly and completely as it appears to me. If you’re a binary thinker, then naïve realism would seem to be right, because you believe the only other possibility is that there is no reality at all. That’s like saying that this animal must be a cat because otherwise there are no categories of animals. We spend most of our day operating on the basis of naïve realism—that the world is as it looks—as we should. But, there are times we have to be open to the idea that the world looks different to others because they’re looking at it from a different perspective, that there are parts of the world we can’t see, and that we might even be misled by our own biases. We might be wrong.

You can see how someone who believed that they see the world without biases (not possible, by the way) would only pay attention to rhetors, information, narratives that confirm what they already believe.

All these things make being open to reasonable persuasion actively scary; we’re “open” to persuasion only if it fits what we already believe. So does authoritarianism, but that’s a different post.

How persuasion happens

train wreck

Sometime in the 1980s, my father said that he had always been opposed to the Vietnam War. My brother asked, appropriately enough, “Then who the hell was that man in our house in the 60s?”

That story is a little gem of how persuasion happens, and how people deny it.

I have a friend who was raised in a fundagelical world, who has changed zir mind on the question of religion, and who cites various studies to say that people aren’t persuaded by studies. That’s interesting.

For reasons I can’t explain, far too much research about persuasion involves giving people who are strongly committed to a point of view new information and then concluding that they’re idiots for not changing their minds. They would be idiots for changing their mind because they’re given new information while in a lab. They would be idiots for changing their mind because they get one source that tells them that they’re wrong.

We change our minds, but, at least on big issues, it happens slowly, due to a lot of factors, and we often don’t notice because we forget what we once believed.

Many years ago, I started asking students about times they had changed their minds. Slightly fewer many years ago, I stopped asking because I got the same answers over and over. And what my students told me was much like what books like Leaving the Fold, books by and about people who have left cults, changed their minds about Hell or creationism, and various friends said. They rarely described an instance when they changed their mind on an important issue because they were given one fact or one argument. Often, they dug in under those circumstances—temporarily.

But we do change our minds, and there are lots of ways that happens, and the best of them are about a long, slow process of recognition that a belief is unsustainable.[1] Rob Schenck’s Costly Grace reads much like memoirs of people who left cults, or who changed their minds about evolution or Hell. They heard the counterarguments for years, and dismissed them for years, but, at some point, maintaining faith in creationism, the cult, the leader of the cult, just took too much work.

But why that moment? I think that people change their minds in different ways partially because our commitments come from different passions.

In another post I wrote about how some people are Followers. They want to be part of a group that is winning all the time (or, paradoxically, that is victimized). They will stop being part of that group when it fails to satisfy that need for totalized belonging, or when they can no longer maintain the narrative that their group is pounding on Goliath. At that point, they’ll suddenly forget that they were ever part of the group (or claim that, in their hearts, they always dissented, something Arendt noted about many Germans after Hitler was defeated).

Some people are passionate about their ideology, and are relentless at proving everyone else wrong by showing, deductively, that those people are wrong. They do so by arguing from their own premises and then cherry-picking data to support that ideology. They deflect (generally through various attempts at stasis shift) if you point out that their beliefs are non-falsifiable. These are the people that Philip Tetlock described as hedgehogs. Not only are hedgehogs wrong a lot—they don’t do better than a monkey throwing darts—but they don’t remember being wrong because they misremember their original predictions. The consequence is that they can’t learn from their mistakes.

Some people have created a career or public identity around advocating for a particular faction, ideology, or product, and are passionate about defending every step into charlatanism they take in the course of defending that cult, faction, or ideology. Interestingly enough, it’s often these people who do end up changing their minds, and what they describe is a kind of “straw that breaks the camel’s back” situation. People who leave cults often describe a sudden moment when they say, “I just can’t do this.” And then they see all the things that led up to that moment. A collection of memoirs of people who abandoned creationism has several that specifically mention discovering the large overlap in DNA between humans and other primates as the data that pushed them over the edge. But, again, that data was the final push–it wasn’t the only one.

Some people are passionate about politics, and about various political goals (theocracy, democratic socialism, libertarianism, neoliberalism, anarchy, third-way neoliberalism, originalism) and are willing to compromise to achieve the goals of their political ideology. In my experience, people like this are relatively open to new information about means, and so they look as though they’re much more open to persuasion, but even they won’t abandon a long-time commitment because of one argument or one piece of data—they too shift position only after a lot of data.

At this point, I think that supporting Trump is in the first and third category. There is plenty of evidence that he is mentally unstable, thin-skinned, corrupt, unethical, vindictive, racist, authoritarian, dishonest, and even dangerous. There really isn’t a deductive argument to make for him, since he doesn’t have a consistent commitment to (or expression of) any economic, political, or judicial theory, and he certainly doesn’t have a principled commitment to any particular religious view. It’s all about what helps him in the moment, in terms of his ego and wealth. That’s why defenders of his keep getting their defenses entangled, and end up engaging in kettle logic. (I never borrowed your kettle; it had a hole in it when I borrowed it; and it was fine when I returned it.)

The consequence of Trump’s pure narcissism (and mental instability) and his lack of principled commitment to any consistent ideology is that he regularly contradicts himself and the talking points his supporters have been loyally repeating, abandons policies they’ve been passionately advocating on his behalf, and leaves them defending statements that are nearly indefensible. What a lot of Trump critics might not realize is that Trump keeps leaving his loyal supporters looking stupid, fanatical, gullible, or some combination of the three. He isn’t even giving them good talking points, and many of the defenses and deflections are embarrassing.

For a long time, I was hesitant to shame them, since an important part of the pro-GOP rhetoric is that “libruls” look down on regular people like them. I was worried that expressing contempt for the embarrassingly bad (internally contradictory, incoherent, counterfactual, revisionist) talking points would reinforce that talking point. And I think that’s a judgment that people have to make on an individual basis, to the extent that they are talking about Trump with people they know well—should they avoid coming across as contemptuous?

But for strangers, I think that shaming can work because it brings to the forefront that Trump is setting his followers up to be embarrassed. That means he is, if not actually failing, at least not fully succeeding at what a leader is supposed to do for his followers. The whole point in being a loyal follower is that the leader rewards that loyalty. The follower gets honor and success by proxy, by being a member of a group that is crushing it. That success by proxy comes from Trump’s continual success, his stigginit to the libs, and his giving them rhetorical tactics that will make “libs” look dumb. Instead, he’s making them look dumb. So, pointing out that their loyal repetition of pro-Trump talking points is making them look foolish is putting more straw on that camel’s back.

Supporting Trump, I’m saying, is at this point largely a question of loyalty. Pointing out that their loyalty is neither returned nor rewarded is the strategy that I think will eventually work. But it will take a lot of repetition.



[1] Conversions to cults, otoh, involve a sudden embrace of this cult’s narrative, one that erases all ambiguity and uncertainty.

Trump and the long con

One of the paradoxes of con artists is that cons always depend on appealing to the mark’s desire for a quick and easy solution, but the most profitable cons last a long time. How do you keep people engaged in the scam if you’re siphoning off their money?

There are several ways, but one of the most common is to ensure that the marks are getting a quick outcome that they like. Con artists will often wine and dine their marks, thereby coming across as too successful to need the mark’s money, and also increasing the mark’s confidence (and attachment). They might be supporting that high living through bad checks, but more often with credit cards and money from previous marks, or by getting the mark to pay for the high living without knowing it. One serial confidence artist who specialized in picking up divorced middle-aged women on the Internet was particularly adept at stealing a rarely used credit card from the women while they were showering. He then simply hid the bills when they arrived.

Because he seemed to have so much money, the women assumed he wouldn’t be scamming them, and would then hand over their life’s savings for him to invest.

They do this despite there being all sorts of clear signs that the guy is a con artist–his life story seems a little odd, he doesn’t seem to have a lot of friends who’ve known him very long, there’s always some reason he can’t write checks (or own a home or sign a loan). There are three reasons that the con works, and that people ignore the counter-evidence.

First, cons flatter their marks, arguing that the marks deserve so much more than they’re getting, and persuade the marks to have confidence in them. They will tell the marks that those people (the ones who are pointing to the disconfirming data) look down on them, think they’re stupid, and think they know better. The con thereby gets the mark’s ego associated with his being a good person and not a con artist—admitting that he is a con means the mark will have to admit that those people were right.[1] The con artist will spin the evidence in ways that show he’s willing to admit to some minor flaws, ones that make the mark feel that she can really see through him. She knows him.[2]

Second, the con works because we don’t like ambiguity, and we tend to privilege direct experience and our own perception. The reasons to wonder about whether a man really is that wealthy are ambiguous, and it’s second order thinking (thinking about what isn’t there, about the absence of friends, family, connections, bank statements). That ambiguous data will seem less vivid, less salient, less compelling than the direct experience we have of his buying us expensive gifts. The family thing is vague and complicated; the jewelry is something we can touch.

Third, people who dislike complexity, who believe that most things have simple solutions, and who believe that they are good at seeing those simple solutions, are easy marks, because those are precisely the beliefs to which cons appeal. Admitting that the guy is a con artist means admitting that the mark’s whole view of life—that the world has simple solutions, that people are what they seem to be, that you can trust your gut about whether someone is good or bad, that things you can touch (like jewelry) matter—is wrong.

And it works because the marks don’t realize that they are the ones who’ve actually paid for that jewelry.

There are all the signs of his being a con artist—all the lawsuits, all the lies, the lack of transparency about his actual wealth, the reports that show a long history of dodgy (if not actively criminal) tax practices, the evidence that shows his wealth was inherited and not earned—but those are complicated to think about. Trump tells people that he cares about them; he (and his supportive media) tell their marks that all the substantive criticism is made by libruls who look down on them, who think they know better. The media admits to a few flaws, and spins them as minor.

Trump is a con artist, and his election was part of a con game about improving his brand. But, once he won the election, he had to shift to a different con game, one that involved getting as much money for him and his corporations as possible, reducing accountability for con artists, holding off investigations into his financial and campaign dealings, and skimming.

And Trump gives his marks jewelry. If you have Trump supporters in your informational world, then you know that they respond to any criticism of Trump with, “I don’t care about collusion; I care about my lower taxes.” (Or “I care about the economy” or “I care that someone is finally doing something about illegal immigrants.”) They have been primed to frame concerns about Trump as complicated, ambiguous, and more or less personal opinion, but the benefit of Trump (to them) as clear, unambiguous, and tangible.

They can touch the jewelry.

And they don’t realize that he isn’t paying for it; he never paid for it, and he never will. They’re paying for it. They bought themselves that jewelry.

There are, loosely, three ways to try to get people to see the con. First, I think it’s useful not to come across as saying that people are stupid for falling for Trump’s cons (although it can be useful to point out that current defenses of Trump are that he’s too stupid to have violated the law). It can be helpful to say that you understand why he and his policies would seem so attractive, but point out that he’s greatly increased the deficit (his kind of tax cuts always increase the deficit). It’s helpful to have on hand the data about how much “entitlement” programs cost. Point out that they will be paying for his tax cuts for a long, long time.

Another strategy is to refuse to engage and just keep piling on the evidence. People get persuaded that they’ve been taken in by a con artist incident by incident. It isn’t any particular one, but that there are so many, and they reject each one as it comes along. So, I think that sharing story after story about how corrupt Trump is, how bad his policies are, and what damage he is doing—even if (especially if) people complain about your doing so—is effective in the long run.

Third, when people object or defend Trump, ask them if they’re getting their information from sources that would tell them if Trump were a con artist. They’ll respond with, “Oh, so I should watch MSNBC” (or something along those lines) and the answer is: “Yes, you should watch that too.” Or, “No, you shouldn’t get your news from TV.” Or a variety of other answers, but the point is that you aren’t telling them to switch to “librul” sources as much as getting more varied information. 

Con artists create a bond with their marks—their stock in trade is creating confidence. They lose power when their marks lose confidence, and that happens bit by bit. And sometimes it happens when people notice the jewelry is pretty shitty, actually.


[1] This is why it’s so common for marks to start covering for the con when the con gets exposed. They fear the “I told you so” more than the consequences of getting conned.

[2] In other words, con artists try to separate people from the sources of information that would undermine the confidence the mark has in the con.

Rough draft of the intro for the Hitler and Rhetoric book

[Much of this is elsewhere on this blog. I’m curious if I’m still having the problem of being too heady and academic.]

Martin Niemoller was a Lutheran pastor who spent 1938-1945 in concentration camps as the personal prisoner of Adolf Hitler. Yet, Niemoller had once been a vocal supporter of Hitler, who he believed would best enact the conservative nationalist politics that the two of them shared. Niemoller was a little worried about whether Hitler would support the churches as much as Niemoller wanted (under the Social Democrats, the power of the Lutheran and Catholic churches had been weakened, as the SPD believed in a separation of church and state), but Niemoller thought he could outwit Hitler, get the conservative social agenda he wanted, disempower the socialists, and all without harm coming to the church. He was wrong.

After the war, Niemoller famously said about his experience:

First they came for the Socialists, and I did not speak out—
Because I was not a Socialist.

Then they came for the Trade Unionists, and I did not speak out—
Because I was not a Trade Unionist.

Then they came for the Jews, and I did not speak out—
Because I was not a Jew.

Then they came for me—and there was no one left to speak for me.[1]

Niemoller was persuaded that Hitler would be a good leader, or, at least, better than the Socialists. After the war, Niemoller was persuaded that his support for Hitler had been a mistake. What persuaded him either time?

Christopher Browning studied the Reserve Police Battalion 101 and its role in Nazi genocide, narrating how a group of ordinary men could move from being appalled at the killing of unarmed noncombatants to doing so effectively, calculatedly, and enthusiastically. German generals held captive by the British were wiretapped, and their recorded conversations often turned to how and why they supported Hitler; many of them had once opposed him. In 1950, Milton Mayer went to visit the small German town from which his family had emigrated and talked to the people living there, writing a book about his conversations with ten of them, all of whom to some degree justified not only their actions during the Nazi regime, but the regime itself—even those who had at points or in ways resisted it. Melita Maschmann’s autobiographical Account Rendered, published in 1963, describes how she reconciled her Hitler Youth activities, which included confiscating property and helping to send people to camps, with her sense that National Socialism was idealistic and good. Robert Citino’s The Wehrmacht Retreats, David Stone’s Shattered Genius, and Ian Kershaw’s The End all describe how so many members of the German military elite not only reconciled themselves to working for Hitler, but to following orders that they believed (often correctly) meant disaster and defeat. Benny Morris’ Roots of Appeasement gives a depressing number of examples of major figures and media outlets that persuaded others, and were persuaded themselves, that Hitler was a rational, reasonable, peace-loving political figure whose intermittent eliminationist and expansionist rhetoric could be dismissed. Andrew Nagorski’s Hitlerland similarly describes American figures who were persuaded that Hitler wouldn’t start another war; accounts of the 1936 Olympic Games, hosted by the Nazis, emphasize that Nazi efforts were successful, and most visitors went away believing that accounts of anti-Jewish violence and discrimination were overstated.
Biographers of Hitler all have discussions of his great rhetorical successes at various moments, enthusiastic crowds, listeners converted to followers, and individuals who walked out of meetings with him completely won over. Soldiers freezing to death in a Russian winter wrote home about how they still had faith in Hitler’s ability to save them; pastors and priests who believed that they were fighting to prevent the extermination of Christianity from Germany still preached faith in Hitler, blaming his bad advisors; ordinary Germans facing the corruption and sadism of the Nazi government and the life-threatening consequences of Hitler’s policies similarly protected their commitment to Hitler and bemoaned the “little Hitlers” below him who were, they said, the source of the problems. The atrocities of Nazism required active participation, support, and at least acquiescence on the part of the majority of Germans—the people shooting, arresting, boycotting, humiliating, and betraying victims of Nazism were not some tiny portion of the population, and those actions required that large numbers walk by. Some people were persuaded to do those things, and some people were persuaded to walk past.

After the war, what stunned the world was that Germans had been persuaded to acts of irrationality and cruelty previously unimaginable. Understanding what happened in Germany requires understanding persuasion. And understanding persuasion means not thinking of it as a speaker who casts a spell over an audience and immediately persuades them to be entirely different. Rhetoric, which Aristotle defined as the art of finding the available means of persuasion, isn’t just about what a rhetor (a speaker or author) consciously decides to do to manipulate a passive audience. What the case of Hitler shows very clearly is that we are persuaded by many things, not all of them words spoken by a person consciously trying to change our beliefs. Rhetoric helps us understand our own experience, and the most powerful kind of persuasion is self-persuasion. What a rhetor like Hitler does is give us what scholars of rhetoric call “topoi” (essentially talking points) and strategies such that we feel comfortable and perhaps deeply convinced that a course of action is or was the right one. Rhetoric is about justification as much as motivation. That isn’t how people normally think about persuasion and rhetoric, and, paradoxically, that’s why we don’t see when we’ve been persuaded of a bad argument—because we’re wrong about how persuasion works.

This book is about Hitler, and yet not about Hitler. It’s really about persuasion, and why we shouldn’t imagine persuasion as a magically gifted speaker who seduces people into new beliefs and actions they will regret in the morning. It’s never just one speaker, it’s never just speech, it’s never even just discourse, the beliefs and actions aren’t necessarily very new, and people don’t always really regret them in the morning.

[1] There are various versions. This one is from here: https://www.ushmm.org/wlc/en/article.php?ModuleId=10007392

Conditions that make persuasion difficult

A lot of people cite studies that show that people can’t be persuaded. As though that should persuade people not to try to persuade others.

That isn’t even the biggest problem with those studies. The studies are often badly designed (no one should be persuaded to change an important belief by being told by one person in a psych experiment that they’re wrong). And the studies aren’t generally designed to keep in mind what the research on persuasion does show: that some conditions make it more difficult to persuade people.

I was going to put together a short handout for students about why the paper they’re writing is so hard (an ethical intervention in one of several possible situations, ranging from arguing against the Sicilian Expedition to arguing for retreating from Stalingrad), and ended up writing up a list of the biggest obstacles.

An opposition audience (i.e., one that has already come to a decision) that:

    • Has taken the stance in public (especially if s/he has taken credit for it being a good idea or otherwise explicitly attached her/his ego/worth to the position);
    • Has suffered for the position, had a loved one suffer, or caused others to suffer (e.g., voted for a policy that caused anyone to be injured);
    • Has equated the idea/position with core beliefs of his/her culture, religion, political party, or ideology (since disagreement necessarily becomes disloyalty);
    • Has been persuaded to adopt the position out of fear (especially for the existence of the ingroup) or hatred for an outgroup;
    • Is committed to authoritarianism and/or naïve realism (equates changing one’s mind with weakness, illness, sin, or impaired masculinity; is actively frightened/angered by assertions of uncertainty or situations that require complex cognitive processes);
    • Does not value argumentative “fairness” (insists upon a rhetorical “state of exception” or “entitlement”—aka “double standard”—for his/her ingroup);
    • Has a logically closed system (cannot articulate the conditions under which s/he would change her/his mind).

A culture that

    • Demonizes or pathologizes disagreement (an “irenic” culture);
    • Is an honor culture (what matters is what people say about you, not what is actually true, so you aren’t “wrong” till you admit it);
    • Equates refusing to change your mind with privileged values (being “strong,” “knowing your mind,” masculinity) and “changing your mind” with marginalized values (being “weak,” “indecisive,” or impaired masculinity);
    • Enhances some group’s claim to rhetorical entitlement (doesn’t insist that the rules of argumentation be applied the same across groups or individuals);
    • Has standards of “expertise” that are themselves not up for argument;
    • Promotes a fear of change;
    • Equates anger and a privileged epistemological stance.

A topic

    • That results from disagreement over deep premises;
    • About which there is not agreement over standards of evidence;
    • That makes people frightened (especially about threats from an outgroup);
    • That is complicated and ambiguous;
    • That is polarized or controversial, such that people will assume (or incorrectly infer) your affirmative position purely on the basis of any negative case you make (e.g., if you disagree with the proposition that “Big dogs make great pets because they require no training” on the grounds that they do require training, your interlocutor will incorrectly assume that you think [and are arguing] that big dogs do not make great pets);
    • That is easily framed as a binary choice between option A (short-term rewards [even if higher long-term costs] or delayed costs [even if much higher]) and option B (delayed rewards [even if much higher] or short-term costs [even if much lower than the long-term costs of option A]).
