When and how have you been persuaded on a big issue?

[Image: Great Dane mix (Chester) with the red ball]

This is a question I used to ask my students, and only now realized I should ask FB friends. What’s a major political issue/narrative/belief/commitment on which you changed your mind, and what made you change your mind?

For me, there are so very many, and I’ll mention one. For reasons too complicated to explain, I ended up being the person sent with a dog to a dog training class. I was 12? The class taught the (literally Nazi) dog training method of tricking a dog into behaving badly and then punishing it by yanking on the choke collar.

About 25 years later, I got two dogs, and read all sorts of studies and books and took classes. This was a moment in my life when I was seriously considering leaving academia and either becoming a dog trainer or a lawyer.

Being an academic, I researched the issue. Except for Ian Dunbar, there was almost no actual research on the issue of what dog training methods actually work. The dominant advice was still “you must dominate your dog.” I had a Malamute/Lab and a Dane/Shepherd mix, and the dominance method only sort of sometimes worked with the Malamute/Lab (if you squinted), and didn’t work at all with the Dane/Shepherd. It was disastrous with him (Chester, for those of you who’ve known me for a while). Ian Dunbar’s advice worked with both, as did Vicki Hearne’s advice. Dunbar and Hearne were oriented toward getting your dog (or horse, in the case of Hearne) to do the right thing and then rewarding them.

Even the most “dominate your dog” rhetoric advised that you give your dog a job, and that was great advice–the only useful part of that whole approach.

So, I changed my mind on the whole “you must dominate your dog” approach, but not because I read one study, or had one conversation; it was because of a lot of things. The most important was that I cared enough about my dogs that I was willing to fling my theory of dog obedience out the window if it didn’t seem to be working for the dogs in front of me.

Only after my personal experience made me dubious did I look more carefully at the arguments and evidence for the dominance model. While that argument was familiar to me, and initially seemed normal, the more I looked at it, the more it was clear that they hadn’t actually done the kind of “research” that would have gotten an honorable mention in a 6th grade science fair.

Ian Dunbar’s advice was grounded in far better research than any of the alpha dog bullshit, although it was still just observational.

(In case you’re wondering, the whole alpha male thing is bullshit, although there is a good argument for a more “leadership” model.)

I mentioned I asked students about times that they changed their minds on a big issue (they didn’t have to tell me what the issue was, or narrate the process in any detail), and I generally got a similarly complicated narrative about a long process involving some studies, personal experience, noticing the flaws in in-group arguments. Sometimes it was a very dramatic life event, and sometimes a particularly good book or documentary.

I have said before, I think that we’re at a point when we need to persuade people who aren’t alarmed about what’s happening in a one-to-one way. I’m not sure how to do that. But I think it might be useful to think about how we were persuaded on big issues. (And, if you know me, you know that dog training is a big issue for me).

So, I think it might be helpful if we shared conversion narratives. Either yours, or references to famous ones.

If you don’t want your FB id (or name) associated with it, DM or email me, and I’ll post it without identifying information.

My hope is that we can come up with a better model of persuasion than what we get from psych studies or focus groups.


Reasonable policies can be reasonably advocated

[Image: Secretary of Defense Robert McNamara in front of a map of Vietnam]
Photo from here: https://www.nytimes.com/2009/07/07/us/07mcnamara.html

Why does having a “reasonable” argument matter?

Some people are claiming that the reason so many people are supporting a political figure they dislike is that our education system is bad. And it is, but not in the ways people think. Our problem has long been that we teach argument, but not argumentation. An argument is a claim with a supporting reason (what Aristotle called an enthymeme); it’s a thing you fling at someone with whom you disagree. It’s very effective for making a person feel confirmed in what they already believe, and therefore also useful for confirming the beliefs of in-group members (or moving them very slightly), but it doesn’t really do much for helping people deliberate together about complicated and controversial problems and policies.

The most popular argument textbook confirms (see what I did there?) the false binary of the rational/irrational split—that one’s position on an issue might be rational (i.e., logical) or emotional (i.e., illogical). That split is itself illogical, and very much an emotional response (the desire to feel that one is rational, and to feel that others are irrational). The false assumption is that a “rational” (aka, “unemotional”) stance on an issue is “unbiased.” I’m not advocating that understanding of reasonable deliberation–I think it’s unmitigated bullshit.

The irony is that this way of describing how people think is wrong, as is shown by so very, very many studies. It is, logically, indefensible (but it feeeeels so good to think of oneself as “rational,” as having a viewpoint that is obviously right and objectively true).

People are biased. Everyone is biased. All humans (and probably other animals) rely on cognitive biases when considering information and making a decision. That’s what the research shows. So, if you tell yourself that people who disagree with you are biased, and you aren’t, what you’re showing is that you’re so deep into confirmation bias and in-group favoritism that you are fifty years late to the party of what research on decision-making actually says. You’re too biased to admit that you’re biased.[1]

Argumentation is a set of strategies that tries to help people disagree productively with one another (not necessarily nicely, unemotionally, persuasively, or in ways that make everyone comfortable), but the strategies are ways of correcting for the biases to which we’re all prone. Argumentation is oriented toward productive and inclusive deliberation, and not just coercion or what one scholar of rhetoric called rhetrickery.

Argumentation requires that participants (usually called interlocutors, a term I like since it sounds as though people are locked together) follow these rules:
1) there is an agreement on the “stasis”—what the hell we’re arguing about. (This rule prevents deflection, and various fallacies like motivism, ad hominem, ad baculum.)
2) all the rules (of logic, civility, citation practices, and so on) apply equally to all parties. (This rule ensures that it is good faith argumentation, rather than just a wanking performance to the in-group or another form of ad baculum.)
3) interlocutors engage the smartest and best opposition arguments. (This rule prevents another kind of deflection, as well as bad faith posturing in front of the in-group.)
4) interlocutors cite their sources when asked to provide them, and, as said above, hold their and opposition sources to the same standards of credibility. (In other words, “this is a good source because it agrees with me, or is in-group,” is not good faith argumentation. It’s performatively admitting that you’re full of shit.)
5) Assertions are not evidence, let alone proof. They’re just assertions. That someone can find a source that asserts that bunnies are not fluffy is not evidence that bunnies are not fluffy; it’s evidence that someone has asserted it. (Were I Queen of the Universe this is a distinction everyone would have to understand before they finished middle school.)

Notice that following these rules wouldn’t lead to a pleasant, comfortable, conflict-free discussion, and that someone who insisted on these rules might be seen as a person creating conflict.

This next paragraph is very pedantic. I’ve spent over forty years studying how communities made very bad decisions when they had all the information they needed to make better ones, and this is a list of the approaches to policy disagreements that go badly. The short version is that they engaged in various methods of argument and not argumentation.

There are a lot of ways that people imagine the ideal way that a community might make a decision. One is that everyone would advocate for their preferred course of action without disagreeing with anyone else (expressivism); another is that people would make the best case possible for their preferred policy, ignoring all norms of ethics, and the one that won the most adherents would be the best (sloppy social Darwinism applied to decision-making); another is providing all the data necessary for the public to make a reasonable decision (dreamy informationalism); another is for an elite to decide what is best and give the public (or their audience) the information that will gain their compliance (rhetorical authoritarianism); another is to provide “both sides” of an argument to people and see what they decide (expressive deliberation, sometimes called agonism by scholars).

I was once an advocate of agonism, but then I looked at how advocates of slavery talked themselves into a lot of bad decisions, and realized that a public sphere in which opposing arguments are expressed doesn’t do shit in terms of helping communities make good decisions. It can, in fact, foster fanatical commitments, especially if the disagreement about policies is falsely reframed as a conflict among identities (e.g., pro- v. anti-slavery). And, really, every disagreement about an admitted problem that is framed as a conflict between two identities (or a continuum between the two extremes) is gerfucked.

And so I abandoned agonism in favor of argumentation.

It’s important to note that I’m not advocating unemotional public discourse (which is neither possible nor desirable—demonizing the expression of feelings is also a contributor to train wrecks, but that’s a different post). Reasonable and emotional are not in conflict; if anything, they’re necessarily connected.

One of the reasons I abandoned agonism is that I realized that the various policy advocates who advocated ultimately disastrous policies refused to follow the rules of argumentation. Sometimes they did so calmly, with lots of data and quantification (e.g., McNamara); sometimes they did so dramatically and hyperbolically (e.g., Hitler). Their style, platform, set of policies, personal merits, ethical standards, and all sorts of other things might be very different, but what they shared was that they couldn’t argue for their policies following the rules of argumentation because their policies were bad. Their arguments were paper tigers that looked fierce attacking even frailer paper oppositions, and so often felt compelling, but they were bad arguments in favor of bad policies.

And that’s the important point. If you have good policies, you can engage in good argumentation. If you can’t engage in good argumentation, it might be because you have bad policies. There might be all sorts of other reasons (access to resources, for instance).

It isn’t that every individual has to be able to put forward a reasonable argument that engages the smartest opposition for every decision they (we) make at every moment. It isn’t even that every individual who supports a particular policy has to engage in reasonable argumentation in favor of it. But someone should. If there is a major public policy being advocated and no one can advocate it using reasonable argumentation, then it’s a bad policy.




[1] I’m being generous by saying someone is only fifty years late. In fact, various philosophers have noted many of the biases, such as in-group favoritism and confirmation bias, albeit not by those terms. John Stuart Mill is just one example.

Some thoughts on persuasion

[Image: train wreck]

A friend asked a question about whether there is research on whether some people are more receptive to some communication styles and more resistant to others.

And the short answer is: a lot. There are scholars working on that question in advertising, political communication, health communication, political psychology, social psychology, argumentation, cognitive psychology, logic, and interpersonal communication. Hell, Aristotle makes claims about what styles are more appropriate for various audiences (and rhetors).

These different scholars don’t all come to the same conclusions, and that’s interesting. My crank theory is that it isn’t because one group is more scientific than another, but because it depends on which kind of persuasion we’re thinking about: a rhetor (Chester) trying to get someone (Hubert) to believe something new or change his mind on something (“compliance-gaining”); Hubert looking at a lot of data and trying to figure out what to make of it (“reasoning” or “self-persuasion”); Chester trying to strengthen Hubert’s commitment to a belief, group, or policy agenda (“confirmation”), perhaps so much so that Hubert might be willing to engage in actions more aggressive or extreme than before (“mobilizing” or “radicalizing”); or Hubert and Chester together trying to figure out the best course of action (“deliberating”).

Because of how research tends to work, people usually examine or set up (in the case of lab research) scenarios that look at only one of those kinds of persuasion. Of course, in the wild, it’s all of them, sometimes fairly mixed up. So, the research doesn’t always apply neatly to how persuasion actually works (or doesn’t).

A lot of the research doesn’t pose the question the way my friend did—they draw conclusions about ways that people are persuaded, rather than beginning with the reasonable hypothesis that individuals don’t all respond the same way, and that people might have styles of reasoning that would make them more or less receptive to styles of communication. Still and all, some of that work turns up interesting data, such as that people tend to prefer teleological explanations of historical or physical events/phenomena. (We don’t like chance.) (Right now I’m working on the rhetoric of counterfactuals, and there’s some interesting work about that—it also turns up in scholarship on why people keep trying to make evolution into a teleological process.)

It’s common for people to cite studies that conclude that people aren’t persuaded by studies.

Think about that. People who are persuaded that people aren’t persuaded by studies cite studies to others to show they’re right. That’s a performative contradiction.

I think that contradiction happens because we know that people aren’t necessarily persuaded to change their mind about X by having a study (or set of studies) cited at them, but we also know that having studies cited might be a set of datapoints on one side of a scale. Persuasion on big issues happens slowly and cumulatively. People who’ve changed their minds on big issues often describe a long process, with a variety of kinds of data—studies, logic, personal experience, narratives (fiction or film), in-group shifts. Kenneth Burke long ago pointed out that repetition is an important method of persuasion—even repetition of an outright lie or logically indefensible claim (he was talking about Hitler). Repetition as persuasion is a basis of much (most?) advertising.

I think some of the most useful work on persuasion is in the work on cognitive biases. People who are prone to binary thinking are more likely to be persuaded by arguments that can be presented as a binary; people drawn to cognitive closure like arguments that deny uncertainty or complexity. (When frightened, most everyone likes simple binaries—that’s a Trish crank theory.)

In addition to binary thinking, I think a few other really important biases are: confirmation bias, in-group favoritism, and naïve realism.

Confirmation bias is pretty much what it says on the label. People are more likely to believe something that confirms what they already believe. We will hold studies, arguments, claims, and so on to different standards: lower standards of proof/logic for what confirms what we already believe, and higher standards for something we don’t believe. That isn’t necessarily a terrible way to go through life—Kahneman (who did a lot of the great work on cognitive biases) argued that we probably should do that for most of getting through the day. But, on important issues, we need to find ways to minimize that bias.

Confirmation bias also works at a slightly more abstract level—we are more likely to believe a narrative, explanation, judgment, cause-effect argument, and so on if it confirms a pattern we believe is how the world works. If, for instance, we are authoritarians, then we’re more likely to be persuaded by an argument that presumes or advocates authoritarianism.

The just world model is another example of that process. People who believe that everyone gets what they deserve are more likely to believe that a victim of a crime, accident, or disease did something to cause that crime, accident, or disease.

You can see how the just world model causes people to place blame on the reddit sub r/mildlybaddrivers all the time—it’s kind of funny the extent to which some people will strive to place blame on the victim. The more that we’re uncomfortable with the possibility that bad things can happen to people who’ve done nothing wrong—the more that we want to believe in a world we can control—the more we are drawn to a narrative that shows the accidents could have been prevented. We want to believe that accidents wouldn’t have happened to us.

It’s all about us.

In-group favoritism is well described here. Basically, we have a tendency to believe that the in-group (the group we’re in) is better than other groups, and therefore entitled to better treatment and more resources, the benefit of the doubt in conflicts, forgiveness (whereas out-group members should be punished for the same behavior), and just generally lower standards. We don’t see them as lower standards—we think “fairness” means better treatment for us and people like us. So, we’re more likely to be persuaded by narratives, arguments, explanations, and so on that favor our in-group. We’re likely to dismiss criticism of the in-group or in-group members as “biased.” We are likely to hold in-group rhetors and leaders to low (or no) standards of proof and reasonableness, especially if we’re in a charismatic leadership relationship with them.

The third, and related, bias that’s important for style of thinking and style of persuasion is “naïve realism.” “Naïve realism” refers to the belief that the world is exactly and completely as it appears to me. If you’re a binary thinker, then naïve realism would seem to be right, because you believe the only other possibility is that there is no reality at all. That’s like saying that this animal must be a cat because otherwise there are no categories of animals. We spend most of our day operating on the basis of naïve realism—that the world is as it looks—as we should. But, there are times we have to be open to the idea that the world looks different to others because they’re looking at it from a different perspective, that there are parts of the world we can’t see, and that we might even be misled by our own biases. We might be wrong.

You can see how someone who believed that they see the world without biases (not possible, by the way) would only pay attention to rhetors, information, narratives that confirm what they already believe.

All these things make being open to reasonable persuasion actively scary; we’re “open” to persuasion only if it fits what we already believe. So does authoritarianism, but that’s a different post.

BSAB: “Both sides” and the slavery debate

[Image: cover of book on the slavery debate]
https://www.uapress.ua.edu/9780817381257/fanatical-schemes/

As I’ve said many times, as soon as a public, media, or person frames our complicated world of policy options as either a binary or continuum of two sides, then it’s all about in- and out-groups, and our shared world of policy disagreements isn’t the kind of disagreement that can help communities come to pragmatic solutions. It’s some degree of demagoguery. Maybe it’s a horse race, maybe it’s a full-throated call for political or physical extermination. But it’s never useful for effective deliberation, about anything. Because there are never just two sides about any policy. And while one can describe our political situation as a binary or continuum, one can also rate all political figures on the basis of whether they agree with your narrow policy agenda. One can also arrange all candidates on the basis of how much they use the letter ‘E’ in their messaging. One can find a lot of ways of categorizing political figures and group commitments—that doesn’t mean those categories are useful ways to think about what policies are best for our shared world.

What framing our complicated world of policy options as a binary or continuum does is frame it as us v. them. And so we engage in motivism, the genus-species fallacy, and various forms of ad hominem.

Once political disagreements are framed as conflicts among various identities (either a binary or continuum), then we don’t deliberate together, and deliberating together is what is supposed to happen in a democracy. Democracy thrives for everyone when people try to work together to solve problems. They can argue, vehemently, petulantly, emotionally, but with each other.

And, really, this is something we all know to be true. The moment that a conflict in your church, family, workplace, garden club, D&D game, neighborhood mailing list, or whatever is framed as a conflict of two sides is the moment that people stop deliberating and start taking sides. They might still debate, but they aren’t deliberating. And the train is wobbling on the tracks.

Here’s an example of a time that binary/continuum was (and is) both false and poisonous: antebellum debates about slavery, and postbellum narratives about what happened. [If you want me to cite sources for everything I’m saying, go here. ]

There weren’t two sides to the debate about slavery, yet that’s how the issue is described, in everything from textbooks to popular understandings.

There were at least eleven.

1) Slavery should be expanded to all states, so that there should be no such thing as a non-slave state. In other words, they didn’t believe in states’ rights.

2) If you enslaved someone in a pro-slavery state, you should be able to take them into any state, and ignore whatever laws that state had about slavery. Again, a stance that made clear that it wasn’t about states’ rights.

[So, let’s stop pretending that slavers were pro- states’ rights. They didn’t recognize the right of a state to ban slavery. If you think I’m wrong, cite sources that show that pro-slavery rhetors thought states had the right to ban slavery. Good luck with that. Also Dred Scott. Also you’re saying that the people who called for secession were liars, since they said it was about slavery.]

3) Slavery should be allowed in current slaver states, and every additional state should be balanced in terms of slaver or not, so that anti-slavery states couldn’t have more than 50% of the Senate. (The 3/5th clause pretty much guaranteed slavers the House, and skewed the electoral college as well.) So this was not a compromise, but a pro-slavery policy, and a violation of states’ rights.

4) We should restrict slavery to current slaver states, and not let it expand.

5) Slavery will die out for economic reasons, and so there’s no reason to try to resist slavers’ actions.

6) Slavery will die out, and result in large numbers of ex-slaves, so we should “re-colonize” freed slaves (this persisted until the 20th century, since it’s essentially Theodore Bilbo’s argument).

7) We should institute a slow ban on slavery, giving slavers the opportunity to sell their enslaved people to areas where slavery was still legal. (This was done in many states).

8) We should ban slavery, and recompense slavers.

9) We should institute a slow ban on slavery, recompense slavers, and return all freed slaves to Africa (not a place they were from; sometimes this proposal included second- or third-generation Americans).

10) We should ban slavery and not recompense slavers.

11) We should ban slavery, and fully integrate African Americans as we have other ethnicities.

Notice that five and six are not anti-slavery, but also not pro-slavery. I have trouble characterizing three or four as anti-slavery, since they were allowing slavery to continue. Pro-slavery rhetors treated those policies as anti-slavery because slavery as an economy was about buying and selling the enslaved people, so, if slavery didn’t expand, then there wouldn’t be a market, and then slavery wouldn’t be profitable. (If you want the chapter and verse on that argument, it’s here.)

Even the positions that could be characterized as anti-slavery (8-11) or pro-slavery (1-3) were substantially different from one another in important ways.

This isn’t a case where, sure, there were subtle distinctions within each of the “two sides,” but there were basically two positions. There weren’t. And, oddly enough, had the pro-slavery rhetors been willing to think and argue pragmatically about the long-term ethical and economic consequences of slavery, they wouldn’t have started an unnecessary war. (Had slaver states taken the most expensive option—free and colonize the enslaved people and be recompensed—it would have cost them less than the war they started.)

And, if at this point, you decide I’m wrong and won’t check my sources because you’ve decided I’m out-group, then you’re making the same mistake that pro-slavery rhetors did.

Because pro-slavery rhetors decided that the complicated world of possible policy options about slavery was actually a binary, they murdered people who criticized slavery, instituted a gag rule in Congress, criminalized criticism of slavery, and started a war they lost.

Pro-slavery rhetors should have taken seriously the criticisms of their position. They should have been open to pragmatic discussions about policies, instead of turning a complicated situation into a binary of identities.

What does all this have to do with the BSAB (Both Sides Are Bad) position? I’ll get to that in the next post.

Arguing like an asshole: obvious problems, and obvious solutions

[Image: Secretary of Defense Robert McNamara in front of a map of Vietnam]
Photo from here: https://www.nytimes.com/2009/07/07/us/07mcnamara.html

I’ve spent a lot of time arguing with assholes. Because I’ve spent a lot of time arguing with all sorts of people.

I was at Berkeley for many years, and argued with all sorts of people–anarchists, Democrats, environmentalists, evangelicals, feminists, Libertarians, Maoists, Moonies (they were terrible-car-crash-can’t-look-away bad at arguing), Republicans, Stalinists, Trotskyites, vegetarians. If you’re paying attention, then you’ve noticed I argued with everyone, including people with whom I agreed, but I disagreed with them on some point that seemed important to me. And some of them, even people with whom I agreed, argued in a way that I’ve come to call “arguing like an asshole.” By the way, so did I from time to time (and not everyone with whom I disagreed argued like an asshole).

Then I got on Usenet, and got to argue with (or watch arguments among) all sorts of people about all sorts of issues, from fairly trivial things (arguments about cooking methods, or dog training) to scammy (get laid fast, make money fast) to the biggest (genocide deniers or defenders). And then I drifted into other social media sites, and I took to arguing with all sorts of people with various alts. And I learned a lot about argument by doing that (also about how algorithms work, and many scams).

One of the things I learned is that, while there are some arguments that are never argued reasonably (e.g., make money fast, or get laid fast), there are assholes everywhere, albeit not evenly distributed. And that is the important point. Arguing like an asshole isn’t about what position you hold, but how you argue.

During all this time, for complicated reasons having to do with a Great Blue Heron, I was becoming a scholar of bad arguments, or, as I like to say, a scholar of train wrecks in public deliberation. And by train wrecks, I don’t mean that people made decisions that turned out disastrously because they didn’t have the information they needed (e.g., they didn’t know how cholera works), but when they had enough information to make a good decision, and they rejected it. What made (and makes) them assholes is how they rejected that information they could and should have considered.

It wasn’t necessarily because they were stupid, or corrupt, or villainous. Often they were very smart and good people who were sincerely trying to do what they believed to be the right thing.

And it was interesting to me that the train wrecks involved the same ways of disagreeing that assholes at Berkeley or in social media argued.

If, at this point, you want me to tell you the simple solution to the problem of how people (often very good people, and people whom we should admire) made disastrously bad decisions, and you want me to put it into 25 words or less, you can skip to the end. If you skip to the end and decide I’m wrong because you don’t agree with my conclusions, then you win the first gold star of assholery. Let’s call it the McNamara medal.

There are two parts to this kind of error. First is believing that all complicated problems can be cogently and clearly summarized, and then persuasively communicated to any person, without having to go through the data; and that good and smart people can instantly recognize whether an argument is true without having to work through the reasoning. (In other words, that no situation is so complex that it can’t be easily and quickly communicated to smart people.)

Second, and related, is that the cogent and accurate summary of a problem necessarily leads to an equally cogent and easily communicable solution. The correct solution to any problem—no matter how apparently complicated—is obvious to smart and good people.

This is one of the most popular ways that countries, political leaders, business leaders, and others wreck a train: assume that every problem has a straightforward solution that is obvious to reasonable people (i.e., them). The problem is exactly as it looks to them, and the solution is the one that seems obvious to them. And if you can’t articulate the problem and solution in such a way that it’s obvious to any and everyone, then you have no clue what you’re doing. If the McNamaras of the world get pushback, opposition, or counterarguments, they conclude that their opponents/critics are too stupid to understand an obviously true argument or too corrupt to accept it. Or both.

Assholes, regardless of the political, religious, or whatever affiliation, decide that an argument is right or wrong on the basis of whether it confirms what they already believe. Their beliefs are non-falsifiable, not in the sense that they’re so true that no one can prove them false, but in the sense that their attachment to those beliefs is not up for reconsideration. (What’s funny is that they do actually change their minds, as well as have a lot of contradictory beliefs, as well as beliefs they believe they have, but that have no influence on their behavior—we all have some of those–but I’ll get to that much later.)

There’s still debate as to whether the US could have won in Vietnam without paying an unacceptable moral, political, and economic cost, but there isn’t debate about whether McNamara’s strategy of limited war with limited means for a limited time could have worked. It didn’t. It couldn’t. Even he later admitted that. But, when he did, he failed to mention that he was told so at the time, and given all the evidence necessary to come to that conclusion as early as January of 1963.

McNamara wasn’t particularly vehement in his arguing, and he always had lots of data, but he argued like an asshole.

A more useful way to think about authoritarianism

train wreck
image from https://middleburgeccentric.com/2016/10/editorial-the-train-wreck-red/

When I found myself as the Director of the First Year Composition program, I also found myself in the same odd conversation more than once. A student would come to me outraged that they were being held to the same standards as the other students. At first I thought I was misunderstanding, but they meant it. They sincerely believed that, for reasons, it was “unfair” (that was the term they used) for them to be held to the same standards as other students. They weren’t claiming any kind of disability, but just … well…privilege.

I came across a similar argument when reading arguments for slavery, on the part of people who claimed to be Christian. They openly rejected “Do unto others as you would have done unto you”—a way of behaving that would have made slavery impossible–in favor of some really vexed readings of Scripture. They rejected a law Jesus stated very clearly in favor of problematic translations and comparisons. (In other words, they were antinomian when it came to Jesus’ laws.) For them, hierarchy was important, and the ideal hierarchy was rigid, with one’s place on the hierarchy determined by various criteria that were often regional (race, gender, wealth, source of wealth, religion, family standing, occupation, place of origin, political affiliation, and so on).

That’s how authoritarianism works. It’s a way of thinking about politics, organizations, families, and/or communities that says the ideal system is a rigid hierarchy of power and privilege: people have the “right” to dominate the people or groups below them, and should submit to those above them. That hierarchy of domination and submission means that people should not do unto others, and should not be held to the same standards. The paradox is that people who claim to be higher on the hierarchy because they are better people hold themselves and others like them to lower standards than the people below them.

There are a few other interesting points about that hierarchy. People believed that the categories that justified the hierarchy were Real, created by some kind of higher power (Nature, Biology, God), and therefore Eternal.

That belief that the categories were Eternal meant that they took what were actually very recent practices and projected them back through history. For instance, pro-slavery rhetors could thereby ignore that the kind of slavery practiced in the US in the 19th century was relatively recent in almost every way, and not how slavery operated in Jesus’ time or before (the closest would be the Helots).

Another confusing paradox is that people who believe in a stable and Real hierarchy are saying, quite clearly, that they are born with certain privileges by virtue of family and so on—they will insist that they are entitled to better treatment and to being held to lower standards—but they get very, very mad if you point out that they have privilege. They are simultaneously asserting and denying that they have privilege.

At the end of this, I’ll explain my crank theory as to what’s going on with that asserting and denying of privilege, but I want to make a few other points about that hierarchy of submission and domination first. It’s very common, across various cultures, religions, organizations, and businesses, but it isn’t universal. Many years ago, Arthur Lovejoy pointed out that what he called “The Great Chain of Being” has a long tradition in Western theology and philosophy. Although the term is medieval, the concept of all creation consisting of a hierarchy goes at least as far back as Plato’s Timaeus. Eighteenth-century natural philosophy began the long and tragic tendency to insist on a “natural” hierarchy of ethnicities. Although Darwin was explicit that evolution was not necessarily progressive, and rejected the hierarchy of species, the hierarchy was so ingrained after Linnaeus that he was largely ignored on that point. “Darwinism” was weaponized to support a stable hierarchy of beings that was not at all what he meant.

The narrative that the hierarchy was ontologically grounded (that is Real) meant that any disruption in the hierarchy was “unnatural”—that is, a violation of nature. That claim has/had two odd consequences. It meant asserting that hierarchical systems are more stable, and less prone to conflict, which led to another backward projection: that there used to be a time of stable hierarchy, and it didn’t have social disruption.

The Catholic Church in the Middle Ages is sometimes cited as an example of such a stable hierarchy that was associated with a lack of rebellion—people will sometimes claim it was stable and peaceable (Chesterton, for instance). In actuality, it was neither. While peasant revolts were fairly unusual until the 14th century, there was constant conflict in Europe, with various political and religious leaders disagreeing (quite violently) about just what the hierarchy was, all the time asserting that there was a Real and natural hierarchy, and claiming that they were enacting that Real one. And, keep in mind, these were Catholics killing other Catholics, or Christians killing other Christians. Sometimes they were major wars over religious issues (e.g., the 13th century Albigensian Crusade), sometimes they were executions and persecutions of heretical sects (e.g., various forms of Gnosticism), and sometimes they were political in nature. Christian troops sacked both Constantinople and Rome, after all.

Neither the political nor the religious hierarchies were actually all that stable or peaceful. Heretical sects and internal conflicts were constant—if the Catholic hierarchy created peace and order, why did the Pope have an army that was used against other Catholics?

The fantasy that there is no conflict in a rigid hierarchical structure is just that—a fantasy.

So, why do people simultaneously claim and deny that they have privilege? I think for similar reasons that people claim that there were long periods in history with no conflict. They need to believe (and claim) that hierarchy provides stability in order to feel better about their status and authoritarian politics. It’s about feelings.

The notion of a hierarchy of privilege makes people really comfortable (“I’m owed this”) and uncomfortable (because it isn’t something they did other than be born). They want to believe that they have privilege because they have earned it. But, oddly enough, they earned it by being born to their family. When they’re arguing for things to which they feel entitled because of privilege, then privilege is a useful concept, and they invoke it. But, when others point out that they might have privilege because of the family they were born into, they feel that they’re being accused of never having to work at all, and so they get mad.

But notice that I’m not saying that authoritarianism is far left, far right, or both. In fact, it’s the whole problem of authoritarianism that should make people stop trying to make politics a binary or continuum. At the very least, there are two axes—one about degree of governmental support for a social safety net (if we’re talking about domestic policy), and another one for commitment to authoritarianism: to what extent do we think people who disagree with us should be treated as we want to be treated? And it’s that second axis that is the predictor of democracy ending.



Arminianism, Antinomianism, and American Politics

woodcut of puritans with hands in the air

My first introduction to American religious debates was a course taught by a prof who came from Yale’s American Studies program (I ended up taking several courses from him), and, as is oddly appropriate for someone from Yale, he was deep into the theological disputes of the 17th century—Yale was founded because of those disputes.

I’ll mention it was a great class. It changed my life, actually. We read nothing but histories of the Plymouth Plantation, beginning with Bradford, and ending with Perry Miller. It was a rhetoric of history class—this was 1978 or so (maybe 1980?), so pretty early for historiography classes for undergrads.

He emphasized that the major theological/political/eschatological debates of the 17th and early 18th centuries were both very serious and oddly binary. They were serious in that there were serious punishments for being in the wrong group (up to hanging), and yet, the criteria for heresy were incoherent. Later, when I learned more about demagoguery, I realized that the New England authorities like Winthrop or Cotton Mather engaged in pretty bog standard demagogic practices. I wrote a fairly boring (aka, very scholarly) book about it, and it shows up again in the introduction to a more recent (and less boring) book, but the short version is that authorities were committed to a theory of Biblical interpretation: Scripture is not ambiguous; it has a clear meaning that any reasonable person can understand; if there is disagreement, then it means that someone is wrong (and possibly in league with the devil), so expel or hang them.

It’s common among a lot of Christians to say that Scripture is absolutely clear, and their interpretation is indisputable. But, if that’s the case, why are there so many major disagreements and different interpretations on major issues? Paul, pseudo-Paul, Augustine, various church fathers, Luther, Calvin, and so many other major figures in Christianity disagree about central questions—such as whether to read Genesis literally, what the most important rules are, the role of grace.

So, what people are saying by asserting that their interpretation of Scripture is undeniable and obvious to any good Christian is that they’re a better Christian than Paul, and so on. If I’m particularly grumpy, I ask how good their Hebrew or Aramaic is.

I only once got a response. The person said that those people didn’t have the benefits of science we now have. Since that person’s whole position was about rejecting current science, I still have no clue what they meant. My drifting around in weird parts of the internet has a lot of interactions like that.

A particularly complicated problem in Christianity has long been the faith v. works problem. Paul and pseudo-Paul worried about it a lot; Luther worried about it more, and Calvin even more. One response is that you can get to heaven by following the laws, and faith doesn’t matter. Over time, people took to calling that Arminianism, and sometimes Judaism (Nirenberg‘s book is really good on the latter tradition). Neither Jews nor Arminius ever advocated works alone, but lots of beliefs are characterized by the name of someone who didn’t actually advocate those beliefs, and often actually condemned them.[1]

Both Luther and Calvin believed that if you only behaved well because you didn’t want to go to Hell, then you were going to Hell. [If you think about that, it raises some serious questions about a lot of current proselytizing rhetoric.] I’m not sure there really have been any sects in the Judeo-Christian tradition that preached that works alone would save you–the closest I can get is the view that various theologians have criticized (behave well or you’ll go to Hell), or maybe the “fake it till you make it” argument, but the latter is a stretch.

At the other extreme is what’s usually called antinomianism (nomos is Greek for “the law”). That heresy says that it doesn’t matter what you do, as long as you have faith. Your faith cleanses your actions of all sins. While it’s hard to find many people who openly advocate Arminianism, antinomianism is more common (e.g., Rasputin, various cult leaders, abusers).

The New England Puritans (who were not, by the way, the first settlers of what is now the US, nor the first Europeans to settle in the US, nor even the first British people to establish a permanent settlement in the US) struggled with the antinomian/Arminian problem. It is a complicated problem—if you do the right things only because you’re trying to get yourself to heaven, were those acts of faith? Or just ways of looking out for yourself? If you have perfect faith that you are saved, and therefore believe that you can do anything you want…that’s a problem.

Here’s the important point: the early New England colony authorities resolved that complicated problem by saying that faith was the same as behaving as church authorities thought one should behave, and having the opinions they thought one should have. I read a lot of Puritan sermons. They didn’t pay much attention to the gospels, focusing more on Jeremiah, Isaiah, Psalms, and some Paul.[2]

For complicated reasons, at one point in my life I found myself spending a fair amount of time listening to a “conservative” (they aren’t and weren’t conservative, but reactionary) “Christian” radio station. And it seemed to me a weird combination of antinomian and Arminian.

Their major message was that you needed to have complete faith that Jesus has saved you from your sins–that faith frees you from paying attention to various laws he laid down. So, that’s the antinomian part. But, getting to heaven requires that you rigidly follow various laws, most of which appear to have been selected without a clear exegetical method (unless the exegetical “method” was “what supports my policy agenda”). That’s the Arminian part.

It seemed to me both antinomian and Arminian.

Have faith in Jesus, but ignore what he clearly said. I’ll give one of the most glaring examples. Jesus said do unto others as you would have done unto you. That is very clearly a rejection of what’s called “in-group favoritism.” But, many Christians are open about believing that there should be in-group favoritism, that people who vote like them, believe what they believe, have their background, and so on should not be treated like others; they should be held to lower standards of behavior than non in-group members. They advocate worse punishment for non in-group members for the same actions; they want basic rights to be restricted to in-group members (“freedom of religion for me but not thee”); they express outrage at non in-group behavior that they dismiss or rationalize in in-group members.

They’re antinomian when it comes to Jesus, but Arminian when it comes to their rules.


[1] The accusation that some person or belief is “Arminian” has as much to do with Jacobus Arminius as many accusations of “Marxist” have to do with Marx, or “Freudian” practices have to do with Freud. So, this isn’t about what Arminius actually said, but about the rhetoric of early American New England Puritans. This heresy was often attributed to Catholics, but, as Nirenberg shows, has most often been associated with Jews.

[2] As another aside, I have to mention that the proof texts for Puritan sermons—when I was working on this, there wasn’t the option of just searching digital sources—seemed to me rarely to include anything from the Gospels. (Tbh, I think it was never, but I avoid using that term.) Lots of Isaiah, Proverbs, Jeremiah, Deuteronomy. I think there might have been pseudo-Paul, but I’m not sure. I hope someone has since done that quantitative research—it’d be interesting to see if there’s a correlation between purist/authoritarian self-identified Christian churches and not citing Christ.


Seeds over a wall

a path through bluebonnet flowers

A lot of people are saying that the murder of Kirk was a false flag. They are also saying that the Reichstag fire was a false flag.

That way of talking about Kirk’s murder helps pro-Trump fascism.

What matters is not whether Kirk’s murder or the Reichstag fire were false flags.

What matters is that pro-Trump figures are treating the murder of Kirk differently from how they treated the murder of Melissa Hortman. They are saying that only the murders of in-group political figures matter. They’re fine with murders of out-group political figures.

They are admitting that they do not believe that they should treat others as they would have done unto them.

Don’t focus on the question of false flag. Focus on the open authoritarianism and rejection of Jesus in their treatment of different kinds of political murder.

If you have Trump supporters in your SM world, point that out to them at every opportunity.

The binary/continuum of left v. right assumes what’s at stake.

books by and about demagogues

It assumes that all political disagreements are really a zero-sum conflict among various kinds of people. As soon as politics is imagined that way, then we’re in a conflict about dominance—which group should be in power?

It’s also wildly ahistorical, and simultaneously false and non-falsifiable.

When I point this out to people, instead of responding to my criticisms (it’s proto-demagogic, ahistorical, false and non-falsifiable), I’m told, “Well, everyone uses it, so it must be true.” In other words, they don’t try to show it’s accurate, except to the degree that it’s self-fulfilling—if the media frames all policy disagreements as fights between two identities, people will think in those terms. That same reduction has often happened with specific policy debates—what was actually a complicated array of various possible policy options was reduced to a binary or continuum of identities (disagreements as varied as the Sicilian Expedition, antebellum slavery, or the Hetch Hetchy Debate).

Everyone agreed with the miasmic explanation of disease. That didn’t mean it was true. The miasma v. germ theory binary also wasn’t true, but it was taken as a given for years.

The fantasy that our policy disagreements are accurately described in terms of a single axis, even if we’re only thinking about domestic policies regarding a social safety net (so ignoring foreign policy, civil rights, and environmental protection), fallaciously conflates two very distinct axes: attitude toward pluralism and support for social safety net policies. A person who is in favor of the strongest of social safety nets is not necessarily someone who refuses to settle for anything less, or who believes that everyone who disagrees with them is spit from the bowels of Satan. A third-way neoliberal (a centrist) is not necessarily any more open to compromise and negotiation, or any less oriented toward thinking of everyone who disagrees as having been spit from the bowels of Satan.

Extremity of policy is not necessarily the same as extremity of commitment, let alone extremity of opposition to dissent.

The horse race/tug-of-war frame for policy disagreements sells papers and evades complicated questions about objectivity, so it was adopted in the 20th century by major media as an apparently “fair” way to cover politics (Jamieson and Patterson have both written about this for years). When the “fairness doctrine” was abandoned, hate-talk radio hosts and openly partisan media used the “us against them” frame to promote the GOP in a way that evaded engaging in reasonable policy deliberation. They advocated policies and candidates largely on the grounds that the hobgoblin of “libruls” hated those policies and candidates.

A person might be opposed to wars of choice for reasons, and opposed to the death penalty and abortion for similar reasons, and in favor of easy access to effective birth control and accurate sex education for the same reasons—thus, they have principles that they apply across policies, yet not in ways that put them in a neat place on a single axis of left v. right. But, were we to think about politics in terms of policies, we’d argue policies, and the GOP especially doesn’t want policy debates. Hence their reliance on a politics of negation—vote for us because we aren’t libruls.

Thinking about politics as a tug-of-war between two sides is necessarily connected to a way of thinking about policy disagreements—good people all know what the right policies are on every issue, and anyone who disagrees does so because they’re a bad person (they’re at the wrong place on the axis). Anyone who disagrees is spit from the bowels of Satan.

So, what could be reasonable and very difficult disagreements about the complicated and uncertain world of policy—what are our options, the relative ads and disads of various policy options, the potential consequences, the feasibility and likelihood of success—become accusations and counter-accusations of bad identity. And the less reasonable are our policy disagreements, the more the GOP benefits, since it ceased engaging in reasonable policy deliberation in the early 80s.

And, to be clear, by reasonable policy deliberation, I don’t mean simply being able to give reasons. Anyone can give reasons for anything. I mean putting forward internally consistent arguments that engage the smartest opposition arguments, and that meet the barest minimum of policy argumentation.

[When I say that, sometimes people think I mean a way of arguing that excludes personal experience, or necessarily marginalizes already marginalized groups. It doesn’t. On the contrary, it’s people in power who are most likely to fail to meet those standards because they don’t have to—as shown by the difference in reasonableness of advocates and critics of slavery. The latter were far more reasonable than the former, even though the former claimed to have positions grounded in logic and science. The same was true of the advocates v. critics of segregation—the latter had the more reasonable rhetoric and position, despite the former’s ability to cite experts and authorities.]

Because the GOP is now the party of anti-libs, the more that opponents of GOP policies accept the (false) frame of policy disagreements as a continuum of left v. right, the more we empower pro-GOP rhetoric. The more that opponents of the GOP argue about our situation in terms of a conflict among identities—whether “centrists,” “leftists,” or “liberals” are more to blame—the more we help the GOP.

All the GOP has to do is foment conflict among its opponents, and I think they (and, tbh, Russian trolls) have done that quite effectively.

My reading of history says that we won’t get out of this by blaming other opponents of authoritarianism, or by trying to purify the opposition, or purify our commitment to a single policy agenda. I think we need to stop gatekeeping identity, and make a coalition of people opposed to GOP authoritarianism, and work together to save democracy. I think that’s the only thing that works in this situation. But I’m open to persuasion on that. Not by deductive arguments about what should work, nor arguments that X must work because what the “Dems” have been doing hasn’t worked (that’s the fallacy of the false dilemma), but arguments from history as to what has worked in similar situations.

The Politics of Purity

people arguing
From the cover of Wayne Booth’s _Modern Dogma_

My area of expertise is how communities make bad decisions—train wrecks in public deliberation. These are times that big and small communities made a decision that resulted in an unforced disaster.

And the way this happens is oddly consistent. From the Athenians deciding to invade Sicily to Robert McNamara refusing to listen to good advice as to what to do in regard to Vietnam, individuals and communities that make disastrous decisions have a similar approach to disagreement:

• The most persuasive/powerful rhetors persuaded large numbers of people that this actually complicated issue is really just a question of dominance between Us and Them (and Them is always a hobgoblin).

• The more that oppositional rhetors accept that false framing of policy questions—Us v. Them—the more that they help (unintentionally or intentionally) those who hold the most power in the community. They’re helping to prevent thorough deliberation about the complicated situation.

• Once things are framed this way, then legitimate questions of policy can’t get argued in reasonable ways. If you disagree about in-group policy, then you’re really consciously or unconsciously out-group. Public disagreements aren’t about whether a proposed policy is feasible, likely to solve the problem, worth what it’s likely to cost, might have unintended consequences—they’re really about who you are and where your loyalties are.

• Instead of trying to give voters useful information about the policy agenda of various groups, media accepts the frame of policy disagreements as really a conflict between two groups and proceeds to treat policy disagreements through a motivistic and horse-race frame because it seems “objective.” It isn’t. It’s toxic af, and depoliticizes politics.

• Even worse is the rhetoric that reframes policy disagreements as an issue of dominance. As though, instead of people who can work together reasonably to find good solutions, politics is some kind of thunderdome fight.

What I’m saying, and have tried to say in so many books, is that the first error that makes a train wreck likely is to deflect the responsibilities of reasonable policy argumentation by saying that there is no such thing as reasonable disagreement about this issue. In those circumstances, to ask for reasonable policy deliberation on this issue is taken as proof that you’re not really in-group, and therefore you can be ignored. Under those circumstances, we too often end up with a politics of purity.

There’s an unfortunately expensive book Extremism and the Psychology of Uncertainty that is a collection of essays from a symposium of political psychologists. And what turns up again and again in that book is that, when faced with an uncertain and complicated situation, people have a tendency to become more “extreme” in their commitment to the in-group. I would say that the scholars are describing a desire for more in-group purity—that the in-group should expel or convert dissenters, members of the in-group should be more purely committed, the in-group should refuse to work with other groups, and the policies should be more pure. While I understand why the scholars in the book describe this process as more “extreme,” I think it’s more useful to think about it in terms of purity. After all, it’s very possible for people to believe that we must purify ourselves of everyone who isn’t a centrist.

By “politics of purity” I mean a rhetoric (and policy agenda) that says that our problems are caused by the presence in the in-group of people who are not fully committed to an individual (the leader), a specific policy agenda, or the group. In any of its three forms (and they’re not fully distinct, as I’ll explain below), the attraction of this approach to politics comes, I think, from its mingling ways of thinking about the power of belieeeving, what I think of as the P-Funk fallacy, the just world fallacy, what Erich Fromm calls “escape from freedom,” social dominance orientation, and the process(es) described by the political psychologists in the collection mentioned above. (Probably a few others.)

If you take all that and create a politics of purity oriented toward an individual (people must have a pure faith in the leader), then it’s charismatic leadership. The advantage to a leader of creating a politics of purity about an individual is that, as Hitler observed, policies can be completely reversed without losing followers. It’s worth remembering that, even as Allied troops were crashing through the west and Soviet troops through the east, and the horrors of the Holocaust were indisputable, about 25% of Germans still supported Hitler. They believed he’d been badly served by his underlings. For complicated reasons, this is pretty common–admitting that one’s commitment to a leader is irrational, let alone a mistake, is incredibly difficult for people. Often, in-group members don’t even know what the leader’s policies are, and are therefore completely wrong about what the leader has done, is doing, or intends to do.

It’s also important to note that charismatic leadership is never on its own. People enter a charismatic leadership relationship because there is an effective media promoting a particular narrative about the leader. In fact, refusing to pay attention to criticism of the leader is one of the ways that people keep their commitment pure.

Insisting on a pure commitment to a policy agenda has a pretty clear history of factionalism, splitting, heresy-hunting, and even politicide, generally to the detriment of the group, and, paradoxically enough, to their ability to get their agenda through. There’s so much purifying of a group (i.e., expelling heretics) that there isn’t time for making strategic alliances with partially compatible individuals or groups. And, often, such alliances are demonized (often literally, as in the history of Christianity–just think about the wars of extermination engaged in against other Christians).

The second (purity of commitment to a specific policy agenda), I think, tends to morph either into the first (charismatic leadership, as happened with Stalin) or the third (a pure commitment to the group). It seems to me that, in the latter case, it’s a charismatic leadership relationship, but oriented toward the group, and it has all the dangers of charismatic leadership. “Believe, obey, fight,” as Mussolini said–he didn’t say, “Reason. Listen. Deliberate.”

There’s inevitably a move to retell history in terms of what will enhance obedience and fanatical loyalty rather than accuracy. Instead of hagiographies about the individual leader, the history(ies) of the group are entirely positive, triumphalist, and dismissive of criticism. Orwell talked a lot about this in various writings, especially Homage to Catalonia and his journalism.

What all three politics of purity do is depoliticize politics, by expelling, criminalizing, demonizing, or dismissing reasonable disagreements about policies. They characterize disagreement as a failure on the part of some people to see what is obviously the correct course of action.

We disagree about policies not because there are people who have gone into Plato’s cave and emerged knowing the true policies we all need to have, and others who are looking at shadows on the wall, but because any policy affects different people in different ways. While not all positions are equally valid, I don’t think there is a policy on any major issue that is the only reasonable one. We disagree about policies because, as Hannah Arendt says, political action is always a leap into the uncertain and unknown.