Trump supporters, like Stalinists, refuse to look at any evidence that might complicate their views

(Jose Luis Magana / Associated Press) https://www.latimes.com/politics/story/2021-01-07/capitol-violence-dc-riots-how-to-explain-to-kids



I’ve spent a lot of time arguing with Stalinists (I was in Berkeley for many years), and nothing reminds me so much of arguing with them as arguing with Trump supporters. Neither Stalinists nor Trump supporters could (or can) reasonably engage opposition arguments. In fact, like Stalinists, Trump supporters refuse to look at anything written by someone who doesn’t fanatically support Trump. Because, like Stalinists, they think that “being rational” means “being fanatically committed to our leader.” They ignore that people who actually have a rational/reasonable position can make an argument that responds to the best opposition arguments.

I’m happy to engage in a reasonable discussion with any Trump supporters who did read this far.

(That would be zero. If I’m wrong, please let me know.) So, this post is about how to think about how Trump supporters argue.

I grew up in a family of arguers, and it sometimes ended up in violence. But it didn’t always end there, and so I got interested in the relationship between argument and violence pretty early on.

For reasons too complicated to explain, I ended up taking rhetoric classes. In those days, the Berkeley Department of Rhetoric was (I now understand) very oriented toward neo-Ciceronian understandings of rhetoric—that is, what might be called responsible agonism. It’s rhetoric as the area (not discipline) of responsibly engaging the best opposition arguments.

And so, since I was in Berkeley, I spent a lot of time arguing with the four kinds of communists (who spent most of their time breaking up each other’s meetings), as well as Libertarians, Republicans, liberals (we can improve things through incremental changes), various kinds of environmentalists, constructivist and essentialist feminists, and everyone except Moonies (since they wouldn’t argue, or even admit they were Moonies).

I think I learned the most about argument by arguing with Stalinists. Maoists and Trotskyites didn’t even try to argue with me—once they found out I disagreed, they just said, “Come the revolution, motherfucker, you’re the first one up against the wall.” It’s weird how often I was told that.

What I think of as “Stalinists” didn’t call themselves that—maybe Leninists? I’ve forgotten the terminology—but they defended every single thing the USSR did. It could do no wrong. As it happens, for complicated reasons, I had visited the USSR in 1974 (or so, maybe 1973?), and I had no love for the USSR. It would take me another twenty years to find the terminology to describe what they were doing (demagoguery), but the short version is that if the USSR was accused of doing something wrong—if I said I’d actually seen something, or there was a documented event—they refused to think about it. Anything that might complicate their commitment to the USSR, they dismissed as anti-USSR propaganda.

They said it was, so to speak, fake news.

They were suckers. Anyone who refuses to consider evidence that they might be wrong is a sucker.[1]

Sometimes the Stalinists would argue a bit, but they too would eventually say, “When the revolution comes, you’re the first up against the wall, motherfucker.” In other words, because they couldn’t defend their position rationally, they resorted to threatening me.

They couldn’t defend their position reasonably because it wasn’t a reasonable position. And that’s why they had to resort to threatening me.

That’s why so many Trump supporters threaten or harass anyone who disagrees with them. That’s why so many gun nuts threaten or harass anyone who disagrees with them. That’s why Trump supporters end up shouting at people over Thanksgiving dinner. Because they can’t argue any better than a Stalinist—because, in fact, they can’t argue in a way that responds reasonably to critics of their position. If you can’t respond reasonably to your best critics, you have a bad argument.

What Stalinists couldn’t do (and Trump supporters can’t do) is hold themselves, their in-group, or their in-group arguments to the same standards they held/hold anyone who disagreed with them. That’s what it means to have a rational argument—not that you have a calm tone, or that you have data, but that you hold your opposition(s) to the same standards of proof and logic as you hold yourself. The way I got Stalinists so mad was pointing out that they held their own arguments to lower standards than they held others’. And that’s why Trump supporters get so mad at me now. They’re mad that I’ve pointed out that even they think their argument will fall apart if they have to treat opposition arguments reasonably.

In other words, Trump supporters (like Stalinists) agree with me that they can’t defend their arguments reasonably. And that’s why they engage in ad hominem, motivism, whataboutism, and threats.

The difference is that Stalinists didn’t care if they were reasonable. Like Trump supporters, they were clear that they held their beliefs because those were the beliefs of their group—they believed what it was loyal to believe, and they refused to consider any data that might complicate their loyalty to Stalinism. Trump supporters similarly believe what it’s loyal to believe in order to support Trump, and they refuse to look at anything that might complicate their fanatical loyalty. But Trump supporters claim to follow Jesus.

Jesus said, “Do unto others as you would have done unto you.” Trump supporters rage when their position is misrepresented, when people make fun of them, when people cite bad data, when Trump is treated as they wanted HRC treated, or as they want Hunter Biden treated. They rage at “libruls” who, they say, live in a propaganda bubble.

So, do they treat others as they want to be treated?

Nope.

Were Trump or his supporters followers of Jesus, then they would never misrepresent others’ positions, lie, cherry-pick, refuse to engage the smartest opposition, or argue as they do.

Trump supporters reject Jesus because they worship someone who treats others as he doesn’t want to be treated, and their worship of him means that they treat others as they don’t want to be treated.

There are two ways to make a Trump supporter incoherently, foaming-at-the-mouth, pound on the table mad: 1) ask them if their commitment to Trump is open to falsification—what evidence would cause them to reconsider their commitment? 2) ask them if they are willing to hold their out-group(s) to the same standards they hold Trump.

They get triggered because they’re very sensitive. While they have a position they can, in their minds, support with lots of data, even they know that their arguments are such fragile gossamer that they disappear if touched with the slightest breath of a reasonable opposition argument.

Here’s how Trump supporters can prove me wrong: they link to sites that support Trump and that engage opposition arguments as they want their own arguments treated, holding themselves and others to the same standards of evidence, proof, and logic. Or they PM or email me to have a reasonable discussion.

Here’s how Trump supporters prove I’m right: they attack me personally, harass me, make an argument about “libruls,” or otherwise admit that it isn’t possible to support Trump and follow Jesus’ rule about treating others as they want to be treated.

Maybe they should think about that. Jesus didn’t mumble.

[1] That doesn’t mean we have to consider every piece of evidence that contradicts what we believe.

What a speed freak taught me about argument v. argumentation

What I learned from someone who said Stephen King and Richard Nixon conspired to kill John Lennon

Berkeley had a Department of Rhetoric, and I was a rhetoric major. So, I took a lot of classes in which we thought carefully about argument (the enthymeme was the dominant model). At some point, I became aware of someone who had sandwich boards about how Richard Nixon and Stephen King conspired to kill John Lennon.

He had a ton of data. He reminded me of Gene Scott, a guy on TV in CA who would sit in a butterfly chair and give all sorts of data supposedly proving something or other. The data was true. Deuteronomy really did specify the cubits of something, and those cubits, if added to the number of Ts in Judges, really did add up to something. But the conclusions were nonsense (iirc, he made various predictions that turned out to be false).

Conspiracy Guy (CG) had two sandwich boards, one with the cover of a major publication, and the other with another (maybe Newsweek and Time?). One had Nixon on the cover, and the other had Stephen King. And CG did an impressive close analysis of the two covers. What did it mean that there was a bit of yellow here? It must mean something—it must be conveying an intention. And he could find a way that it was expressing the desire to kill John Lennon.

Since I was trained by New Critics, I was familiar with essays about “what does purple mean in Oscar Wilde’s The Picture of Dorian Gray?” I even helped students write those essays. The assumption was that every authorial choice means something—it is conveying a message to the enlightened reader. (Btw, purple means nothing in Dorian Gray.) Being a good reader means being the person who catches those references that seem meaningless to the unenlightened.
Nah, it doesn’t. It means you’re over-reading. I realized this when I was watching this guy on the street make an argument for why Stephen King and Richard Nixon had conspired to kill John Lennon on the basis of his close reading of the two magazine covers.

He had a ton of data, and all of it was true. There was yellow, the people were looking a particular way; if you squinted you could see this or that, and so on. He also had good sources, Time and Newsweek. So, if we think of having a good argument as having claims that are supported with a lot of data from reliable sources, he had a good argument. But it wasn’t a good argument. It was nonsense.

What he taught me is the difference between data and evidence. What he also taught me is that people mistake quantity of data for quality of argument, and that some people (especially paranoid people) reason from signs rather than evidence. What I mean is that he had a conclusion, and he looked for signs that his conclusion was right. We can always find signs that we’re right, but signs aren’t evidence.

His argument was nonsense. Were Stephen King and Richard Nixon involved in a conspiracy to kill John Lennon, there’s no reason they would have signalled that intention via magazine covers determined independently and some time in advance. CG was mistaking his interpretation for others’ intention–a mistake we all make. It’s hard to remember that something seeming significant to us doesn’t mean someone else was signifying a semi-secret message. Were CG making a rational argument, then his way of arguing (who is on the cover of the two magazines) would always be proof of a conspiracy. But it isn’t. Or else every week there are some really weird conspiracies going on. It’s only “proof” when it supports his claim. That’s what I mean by someone reasoning by “signs.” The notion is that there is a truth (what we already believe), and data that supports what we believe counts as a sign that we’re right.

People who believe in “signs” rather than evidence treat data suggesting they’re right (“Nixon’s left eyebrow is raised”) as a sign, and ignore data suggesting they’re wrong (the argument makes no sense). So, it’s always a circular argument.

In other words, data is right if and only if it confirms what we already believe, and it’s irrelevant if it doesn’t. If we think about our world that way—what we believe is true if we can find data to support it, and we can dismiss all data that complicates or contradicts our beliefs—then our beliefs are no more rational than a speed freak on a street in Berkeley going on about Stephen King and Richard Nixon. He was wrong. If we argue like he did, we’re just as wrong.

The two sides myth: preformationism v. epigenesis

Great Feuds in Science describes a feud you don’t hear about much. If you do hear about it, you hear a strategically vexed version.[1] For years, there was a debate about the origin of life—what makes something come alive? It’s conventional to say that there were “two sides” on this issue—that’s how it was described in its era, and how it’s generally narrated.

What I want to do is use that example to show that describing a situation as having two (and just two) sides leads to a misunderstanding of the issue(s) even when everyone agrees that there are only two sides. That something can be mapped as two sides doesn’t mean that’s an accurate way to think about it. If we reduce complicated issues to two sides, then we ask: which group is right? And, since that’s the wrong question, we’ll get a wrong answer.

Because positions with important differences get blended into one, people end up engaging in fallacies like straw man and nutpicking without realizing it.

In the 18th century, it was conventional to believe that there were two camps on the issue of the origin of life: preformationism and epigenesis. Hellman summarizes preformationism: “all embryos existed, preformed though infinitesimally tiny, in either the egg or the sperm” while “plants were thought to arise from preexisting miniature organisms hidden in the seed” (68). In other words, if two humans have sex, there was in either the sperm or the egg (there was some disagreement on this point) a teeny, tiny person, a homunculus. That being just gets bigger as they grow. Preformationism was wrong.

Beliefs are not autonomous mobiles floating in space. They are entangled with other beliefs—as proof, conclusion, or (most commonly) both at the same time. Preformationism was both the evidence for and conclusion of the belief that God created all of creation at one moment. That argument runs like this: preformation is right because it supports the notion of a static creation and the notion of a static creation is right because preformation supports it. It’s a Möbius strip of reasoning.

Hellman doesn’t give a precise definition of epigenesis, nor do various other sources, because it was defined through opposition: epigenesis was whatever preformationism was not. One version, advocated by Needham among others, was spontaneous generation, basically the idea that life springs from dead matter.

Needham boiled mutton gravy, put it in a container sealed with cork, and heated it to a point that people believed was enough to kill any living thing. And there was life that sprang up (worms). He was clear that he had proof. (He didn’t—part of my point in this post is that data is not proof.)

According to Hellman, atheists used Needham’s experiment to support their case. That’s the mirror image of the logical mistake that preformationists made. The atheist argument accepts the associations preformationists insisted were necessary: that preformation proves God’s static creation. Since the preformationists were wrong about preformation (which was supposed to be proof of God), they were wrong about how creation happened, and therefore wrong about God. Notice that this is a valid argument only to the extent that the entire world of possible scientific, religious, and political beliefs is really a world of only two possible positions, and that preformationists were right in associating religious belief with preformationism. They were wrong. So were the atheists. Not because being an atheist is wrong, but because those associations were wrong, and Needham’s experiments were bad.

Voltaire argued that Needham was wrong (he was), but he did so with arguments no more rational than Needham’s. And, that Needham was wrong in arguing for spontaneous generation doesn’t necessarily mean he was wrong in arguing against preformationism, let alone wrong about creation or God. (As it happens, he was, but so was Voltaire.)

If you treat a complicated issue as two sides, then you can believe that showing any person (or specific claim) on “the other side” is wrong means you’ve shown that whole side is wrong about everything. You haven’t. You’ve misunderstood and misrepresented the issue. Both Needham and Voltaire were right that the other was wrong, but they were wrong in thinking they were right.

Here’s what I mean. An old, but I’ve come to think very useful, concept in argumentation is that affirmative and negative cases are different. We tend to conflate them. Or, more precisely, we tend to treat a solid negative case as though it’s a solid affirmative case.

An affirmative case is one in which I say that my policy, claim, or party is right. A negative case is one in which I say that your policy, claim, or party is wrong. An effective negative case is not a rational argument for an affirmative. If I believe that bunnies are communists, and you believe that they are Zoroastrians, we each have an affirmative case we need to make. (Bunnies are communists; bunnies are Zoroastrians.) If I make an effective negative case (you have not shown that bunnies are Zoroastrians), I have not thereby shown that my affirmative case is true (bunnies are communists). That’s the mistake that Voltaire made.

But, so very, very much of our public discourse makes Voltaire’s mistake. Both Needham and Voltaire had strong negative cases; neither had affirmative cases stronger than a weak sneeze.

If we ask the wrong question, we will always get a wrong answer. If we ask, which of these two groups is right?, we’re asking the wrong question.

If we assume that all of our policy options are defined in terms of two identities, or a continuum between them, then we are arguing policy no more rationally than Needham and Voltaire. We might be right that they are wrong, but that doesn’t mean that we are right that we are right. Their being wrong doesn’t make us right.

[1] You read about how Pasteur showed spontaneous generation was wrong. Various people, including Voltaire, had also shown it was wrong, but they did so in favor of a grand narrative that was just as wrong. People who want to have a narrative of science that is about truth-tellers opposed to religious bigots don’t like to talk about people like Voltaire. There are a lot of things they don’t like to talk about, like eugenics. Another binary we need to abandon is scientists v. bigots. If we could step away from talking about social groups, we might be able to talk about ways of reasoning and arguing in favor of policies/claims. I’d like that.

A short list of fallacies

broken table
image from https://www.sportsfreak.co.nz/super-bung-bung/broken-table/

Arguments are always series of claims; a valid argument is one in which the claims are connected. Think of it like a table—if the legs aren’t connected to the tabletop, then the table will fall over. Fallacious arguments are ones that lack legs entirely, or in which the legs aren’t connected to the tabletop. In most disagreements, we are in the realm of “informal” argumentation; that is, formal logic doesn’t necessarily help us. Often, what determines whether an argument is fallacious isn’t simply the “form” of the argument, but how it works in context.

Productive disagreements need the people disagreeing (the “interlocutors”) to argue about the same issue, use compatible definitions, fairly represent one another’s positions, hold one another to the same standards, and allow each other to make arguments.

There are lists of fallacies that make very fine distinctions, and are therefore very long and detailed—this is a list that seems to work reasonably well for most circumstances.

Fallacies of relevance

A lot of fallacies break that first condition: they are claims that aren’t relevant to the disagreement, but they are inflammatory. They either distract people into arguing about irrelevant topics or else shut down the argument altogether.

Red herring. Some people use this term for all the fallacies of irrelevance. Red herrings are claims that distract the interlocutors (or observers) from the trail we should be following. The phrase probably comes from a story in which someone drags a red herring across the trail of a rabbit to fool the pursuers (“red herring”); the claim someone has made is so stinky that people get distracted.

Argumentum ad hominem/ad personam/motivism. Contrary to what many people think, an attack on an interlocutor is not necessarily ad hominem. It’s only ad hominem (or fallacious) if the attack is irrelevant. Attacking someone’s credibility on the grounds that they don’t have relevant authority, accusing someone of committing a fallacy, or pointing out moral failings is not necessarily fallacious, if those factors are relevant. If I say that you shouldn’t be believed because you’re a woman, and your gender is irrelevant to the argument, then it’s ad hominem. Ad hominem often takes the form of accusing someone of being part of a stigmatized group, such as calling all critics of slavery “abolitionists” or any conservative a “fascist.” Sometimes that derails the disagreement, so that we’re now talking about how to define “socialist,” and sometimes it is so inflammatory that we stop having a disagreement at all and are just accusing one another of being Hitler. A somewhat subtle form of ad hominem is what’s often called motivism; i.e., a refusal to engage an interlocutor’s argument on the grounds that you know they’re really making this argument for bad motives. Sometimes people really do have bad motives, but they might still have a good argument. The problem with motivism is that it’s often impossible to prove or disprove someone’s motives.

Argumentum ad misericordiam/appeal to emotions. As with ad hominem, appeal to emotions is not always a fallacy—it’s a fallacious move when it’s an attempt to distract, when the appeal is irrelevant. All political arguments (perhaps all arguments) have an emotional component—otherwise, we wouldn’t bother arguing. If I argue that something is a bad policy because it will cost one million dollars, I’m appealing to the feelings we have about saving or spending money. If you say it’s a bad policy because it will kill ten children, you’re appealing to feelings just as much as I am. Those appeals to emotion are fallacious if they’re irrelevant (e.g., if our current policy already costs a million dollars and kills ten children, then the new policy isn’t a change in either factor, so those arguments are probably irrelevant), or if they’re being used to distract from other issues or end the disagreement. If, for instance, I refuse to discuss any aspect of the policy other than cost, or I engage in hyperbole about what will happen if we spend a million dollars, then my argument is a fallacious appeal to emotions. It’s also fallacious if I say that you should vote for me because I have a really cute dog, I’ve had a hard life, or I’ll cry if you don’t vote for me—those are all fallacious appeals to emotion. Crying to get out of a traffic ticket is a fallacious appeal to emotions. (And that example brings up the problem that fallacies are often effective.)

Tu quoque/whataboutism. This fallacy is the response that, “You did it too!” It’s fallacious when whether the interlocutor did it is irrelevant. The problem with tu quoque is that, if I’ve lied, pointing out that you lied doesn’t mean that what I said was true. We’re now both liars. Sometimes the fallacy involves false equivalency. For instance, if you and I are running for Treasurer, and I say that you’re a bad candidate because you embezzled, and you say that I embezzled too, that might be fallacious. If you’ve been Treasurer of multiple organizations and embezzled substantial amounts every time, and I once took a pen home for personal use, it’s fallacious (it’s also the fallacy of false equivalency—one argument can be multiple fallacies at once). If I say that honesty is the most important thing to me, and I condemn someone else for lying, and I’m lying in that speech, that I’m lying while condemning liars might be a relevant point. At that point, you might talk about my motives and not be involved in motivism—you can point out that I don’t appear to be motivated to engage in rational argument.

Appeals to personal certainty/argumentum ad verecundiam/bandwagon appeal. When we’re arguing, appealing to an authority is inevitable. Appeals to authority are fallacious when they’re irrelevant—the site, source, or person being appealed to is not an authority, is not a relevant authority, or has not made a claim relevant to the argument. For instance, if I say that squirrels are evil, and my proof is that I’m certain of that (appeal to personal certainty), then, unless I’m a zoologist who specializes in squirrels, my opinion is irrelevant. Appealing to a quote from Einstein would also be irrelevant—while he’s an expert, he was never an expert about squirrels. Quoting Einstein’s “God does not play dice with the universe” does not help in an argument about theism, since he wasn’t a theologian, he was disputing quantum physics, and he later changed his mind about quantum physics—it isn’t a relevant claim or made by someone with relevant expertise. Saying that something is true because many people believe it (bandwagon appeal) is another form of appeal to irrelevant authority—many people have been wrong about things before. That many people believe something is relevant for showing it’s a popular perception, but probably not for showing that it’s true.

Fallacies of process

In formal logic (if p, then q), a process is valid or not regardless of context, but in informal logic, it’s more complicated, and we often end up having to talk about whether something is a fallacy because the claims are related but only weakly, or not related but appear to be, or don’t necessarily follow. The notion of whether something necessarily follows is important. The claim that “A caused B” might be true (“Being hungry caused me to eat cookies”), but the two terms aren’t necessarily related—I might have eaten something else. When things are necessarily related, then A always causes B. Fallacies of process involve claiming that B follows from A when it doesn’t.
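
To make the contrast concrete (a minimal illustration of my own, not from the original post), here is a valid formal pattern next to a fallacious one; the first preserves truth no matter what p and q stand for, the second does not:

\[
\frac{p \to q \qquad p}{q}\ \ \text{(modus ponens: valid)}
\qquad\qquad
\frac{p \to q \qquad q}{p}\ \ \text{(affirming the consequent: invalid)}
\]

“If it rains, the street gets wet; the street is wet; therefore it rained” has the second form, and it fails because something else (a street cleaner, say) could have wet the street.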

Binary reasoning. Some people argue that this fallacious way of thinking is behind a lot of fallacies of argument. Binary reasoning is the tendency to put everything into all or nothing categories (black or white thinking). So, a person is either a Christian or a Satanist, Republican or Democrat. Since situations are rarely a choice between two and only two options, putting things into binaries is frequently fallacious.

Genus-species fallacy/fallacy of composition/fallacy of division/cherrypicking. Drawing a conclusion about an entire category (genus) from a single example (species), or even from a small set of examples, is a fallacy. We tend to fall for that fallacy because of confirmation bias, a bias that means we notice (and value) data that confirms what we already believe. We’re also prone to let striking examples mean more than they should, simply because they come to mind (called “the availability heuristic”). An example is useful for illustrating a point, but examples rarely prove it. Coming to a conclusion about a large category on the basis of one example is moving from species to genus (fallacy of composition), such as assuming that because the one French person you knew liked tap-dancing, all French people like tap-dancing. The more common fallacy is to move from genus to species (fallacy of division): assuming that, since something is part of a large category, it has the characteristics we attribute to that big category. For instance, it’s fallacious to assume that, since a person is French (genus), they love croissants (species). Even if the characteristic is statistically true of the majority in that category (most Americans are Christian), it’s fallacious to assume that the individual in front of you necessarily fits that generalization. Picking only those examples (studies, quotes, historical incidents) that fit your claim is generally called “cherrypicking.”

False dilemma/poisoning the well. If there are a variety of options, and one of the interlocutors insists there are only two, or insists that we really only have one (because they have unfairly dismissed all the others), then that person has fallaciously misrepresented the situation. “You’re either with me or against me” is a classic example of the false dilemma, especially since “with me” usually means “agree with everything I say.” You might disagree with something I say because you’re “for” me—you care about me, and think I’m making a bad decision.

Straw man/nutpicking. We engage in straw man when we attribute to the opposition an argument much weaker than the one they’ve actually made. We generally do this in one of three ways. First, if people are drawn to binary thinking, then they’re likely to assume that you’re either with us or against us. For instance, if they think a person is either completely loyal to a political party or a member of the “other” party, then they’ll assume that anyone who disagrees with them is a member of the “other” party. (So, if I’m a binary thinker, and a Republican, and you criticize a Republican policy, I might assume that you’re a Democrat and then attribute to you “the” argument I think Democrats make.) Second, we will often unconsciously make an opposition argument (or even criticism) more extreme than it is—you’ve said something “often” happens, but I represent your argument as the claim that it “always” happens. Third, we will often take the most extreme member of an opposition group and treat them as representative of the group (or position) as a whole—that’s often called “nutpicking” (a term about which I’m not wild).

Post hoc ergo propter hoc/confusing causation and correlation. This fallacy argues that A preceded B, so it must have caused B. Of course, it isn’t always a fallacy—if A always precedes B, and/or B always follows from A, they must have some kind of relationship. The relationship might be complicated, though. While a fever might always precede illness, reducing the fever won’t necessarily reduce illness. Lightning doesn’t cause thunder—they’re part of the same event.

Circular reasoning. This is a very common fallacy, but surprisingly difficult for people to recognize. It looks like an argument, but it is really just an assertion of the conclusion over and over in different language. For instance, if I argue, “Squirrels are evil because they are villainous,” that’s a circular argument—I’ve just used a synonym. Motivism sometimes comes into play here. For instance, I might say, “Squirrels are evil because they never do anything good. Even when they seem to do something good, like pet puppies, they’re doing so for evil motives.” That’s a circular argument.

Non sequitur. This is a general category for when the claims don’t follow from each other. It’s often the consequence of a gerfucked syllogism. Sometimes people are engaged in associational reasoning: linking claims because they feel connected, not because one follows from the other.


A few other comments.

An argument might be fallacious in multiple ways at the same time. For instance, arguing that anyone who disagrees with me is a fascist who wants to commit genocide is binary thinking, ad misericordiam, motivism, and almost certainly straw man. And, once again, identifying a claim as a fallacy almost always requires explaining how it is fallacious.

Another way of thinking about fallacies is that they are moves in a conversation that obstruct productive disagreement. If you think about them that way, you get a list with a lot of overlap, but some differences.

Citations.
“red herring, n.” OED Online, Oxford University Press, June 2020, www.oed.com/view/Entry/160314. Accessed 15 July 2020.

The ten rules for rational-critical argumentation

excessively complicated map of policy argumentation
Image from here: https://csl4d.wordpress.com/2017/12/27/policy-argumentation/

I’ve often mentioned that I think Van Eemeren and Grootendorst’s rules for rational-critical argumentation are useful. But they’re written in a way that makes them really hard to understand, and I’ve long wanted to put them into more straightforward language. I’ve procrastinated doing that because first I have to explain a bunch of things. The first is one that most people don’t even consider: what are we doing when we disagree?

We’re in such a world of neoliberalism that the assumption is that we’re trying to sell each other something, or we’re competing for a market. But the notion that discourse must be a sales pitch is just one way of thinking about disagreement.

I’ve written and re-written about the various ways of thinking about what we might be trying to do when we disagree, and what I’ve written always ends up heady and abstract and hard to follow. So I’m going to go with a flawed analogy, one I’ve lifted from Aristotle.

Let’s think about wrestling. Also, let’s imagine the wrestlers are Winston and Emma (just so I don’t end up in ambiguous pronoun reference).

Why are Winston and Emma wrestling?

They might be wrestling because they’re trying to kill each other. This wrestling has no rules, no limits, and no goal other than the permanent extermination of the other.

They might be wrestling as champions of their communities; they’re not trying to exterminate the other, but to destroy the other’s political power, and generally to gain some specific political outcomes (change in territory, control of the government, exploitative relationships legalized). In other words, this would be modern warfare in light of the possibility of community judgment: post-Geneva Convention warfare.

Or, perhaps, they’re wrestling for even more specific policy outcomes. They’re wrestling over who gets the salmon tonight. Tomorrow, they’ll wrestle again for who gets it tomorrow. This kind of wrestling may or may not have limits on what is allowed. If it doesn’t have limits, it’s outcome-specific demagoguery; if it has limits, particularly regarding tone and civility, then it’s decorous argument (note that’s “decorous,” not “rational”).

Perhaps Winston is a bully, or a faux-bully, who talks a lot about how he beat up others, and he’s using that status as a strong guy to recruit others to his group, or encourage them in their bullying. Emma might choose to wrestle with him to show he’s a bully and a fraud. Since this is most effective when it stays within the rules for rational-critical argumentation, I always think of it as the rational-critical alpha roll. (The point isn’t to engage Winston in rational-critical argumentation, since he probably isn’t interested in it, but to show that he isn’t, and to shame him. Some people argue that’s what Socrates is doing in some dialogues.)

They might be wrestling as part of a for-profit show, in which everything is scripted, and they’re just following their scripts because the pay is great. This is argutainment. The point is the conflict, not resolving it, because the conflict becomes unprofitable the second it’s resolved. So, Emma and Winston have to keep fighting. But that’s also unsatisfying, since the audience will attach to one or the other.

The most profitable version of this scripted wrestling is one in which Winston is in-group for the audience, always nearly loses but rarely actually loses, and Emma cheats egregiously while the ref isn’t looking. Sometimes, after Emma has cheated relentlessly, Winston cheats once and wins. So, his win looks like payback. It’s still scripted, and it’s still really for show.

Another kind of argutainment is so dominant that I think I have to mention it. This is when Emma and Winston don’t actually wrestle at all. Winston wrestles with a plastic doll that has “EMA” written on it (or a man filled with straw) and wins (what a shock). I think of this as straw man argutainment.

Emma and Winston might be members of a college wrestling team, and the point of their wrestling is to bring honor to their college. (Or just to win.) There are lots of rules. This is decorous agonism.

Perhaps they’re friends, and they think it’s fun to wrestle. They each want to win, but not badly enough to hurt the other. There’s no referee because they’ll try to be fair. This is friendly wrangling.

Perhaps they believe that wrestling is a really good sport because it gives a healthy kind of flexibility and strength, and they want to wrestle with each other in order to improve themselves and each other. When we make the analogy to argumentation, this is rational-critical argumentation.

Sometimes Emma and Winston aren’t wrestling with each other at all. This is the tai-chi of argumentation, in which people simply admire the moves an individual makes. This has two types. One is very rigid, and says that there is a right way to make every move, and Emma and Winston can be assessed as to which one most fits the correct form, regardless of whether it’s actually a good way to wrestle. Let’s call this standardized testing. The second is that Emma and Winston each demonstrate the moves they like to make, and they simply watch each other, perhaps learning, perhaps not. I tend to think of that as the expressive model.

Generally, when people set out a list like this, it’s an expeditio—a list that sets up one option as the right choice. I think every one of these is a valid choice, depending on the circumstances. Every single one is also a bad choice, depending on the circumstances.

[As an aside, I’ll say that one grump I have about scholarship in rhetoric and writing is that it too often begins by assuming that only one of the above goals is valid, or that we all have to agree as to which is the model we should be promoting. That notion that there is only one kind of correct public discourse is a claim that can’t be defended through rational-critical discourse, which is kind of funny if you have the excessively pedantic sense of humor I have. I’m on the side of people arguing for various goals, various needs, various means, and teaching students that there are those different ways of arguing.]

One more piece of background information before I can get to the ten rules. The market model of knowledge says that the belief that sells the most is the best belief—that’s a version of the argutainment model. It says that the argument that pleases the most people is the best. There is, as far as I can tell, no evidence that claim is anything other than a Möbius strip of justification. Slavery, Nazism, eugenics, surgeons refusing to wash their hands, mullets—all of those meet the market model of belief standard for good belief. It’s a bad model. What’s popular, especially when not all opinions are weighted equally (the market model gives more preference to the opinions held by people with more money), is not necessarily what is ethical, in the long-term best interest of the community, or what the majority of people want.

If Winston and Emma are disagreeing about who should do the dishes, they could see it as a zero-sum argument—they win to the extent that they get the other to do the dishes. Their disagreement then becomes a way to get the other to submit. They’re either in outcome-specific demagoguery or decorous argument still oriented toward getting their way. If Winston and Emma see their disagreement about the dishes as a question of who wins, who gets the other to submit, or who is the better person, they’re seeing the disagreement about the dishes as just one of many instances that are really about a zero-sum contest as to which of them is a better person (or which one is doing more, or sacrificing more).

Fuck that shit. I had that marriage. It was bad.

So, let’s imagine that Winston and Emma disagree deeply but they don’t think the other is evil. They have, basically, two ways of approaching the disagreement that will serve them well. One is the expressive model, in which they each express what they believe, and they try to understand the other. Agreement, persuasion, argumentation—all of those are off the table. It’s just about listening. This way of approaching disagreement is incredibly powerful, as shown by projects like Hands Across the Hills or Divided We Fall.

That model is about resolving our serious cultural problems that come from people who breathe deep in a media world that relies on the demonization of others. The expressive model is vexed when it comes to systemic issues, ones that don’t necessarily rely on the conscious intentions or feelings of individuals. Imagine that Winston refuses to wear a mask. He doesn’t intend to infect others or get infected; he thinks that, by doing exactly what his media tells him to do, he’s showing his individuality and independent judgment.

There is no way to get Winston to understand the irrationality of his position (and it is irrational) from within the expressive or argutainment model. From within those models, his position seems fine.

So here we are at the rational-critical model. It isn’t persuasive. It doesn’t work within the market model of discourse. It isn’t about selling anything. It isn’t about making everyone feel good. It isn’t about an agent who gains compliance on the part of the object.

It’s about both Emma and Winston believing, simultaneously, that their positions are so right that they can withstand the strongest counterarguments, and that they might be wrong, so they’re open to disproof. And these are the conditions of disproof. I find that, when I’m talking about this issue, I have to emphasize that these are not the rules everyone has to follow in every conversation (that’s why there’s this long lead up). You can have a great conversation without following these rules. If you’re playing soccer, and you pick up the ball and run with it, you’ve either committed a foul or you aren’t playing soccer any more. You might have just invented rugby.

If I say, “Here are the characteristics of warblers,” someone saying, “But kangaroos aren’t like that” is not actually proving me wrong. Kangaroos are great; I’m not saying they aren’t. But they aren’t warblers.

One more piece of background information. Because we are so polarized, if I say anything about Democrats or Republicans, hot cognition is triggered, so let’s imagine that there are two political parties—one led by Chester (called Chesterians), and the other led by Hubert (Hubertians), and they disagree about the best methods of keeping squirrels (considered bad by both parties) from getting to the red ball (considered good by both parties). Winston is a Chesterian, and Emma is a Hubertian.

Okay, the rules.

1. Freedom rule
“Parties must not prevent each other from advancing standpoints or from casting doubt on standpoints.”

This rule prohibits argumentum ad baculum—Winston can’t threaten to hurt, fire, or harm Emma for disagreeing with him and still have their discussion be a rational-critical disagreement. Of course, there are lots of situations in which a good and productive disagreement might have Winston telling Emma she is not allowed to make certain arguments. If Emma is CEO and Winston is the company attorney, and Emma advocates a course of action that could get them sued, Winston would be wise to say, “If you advocate that ever again, I will quit as your attorney.” Winston might threaten to fire Emma if she keeps making racist arguments; Winston might threaten to break up with her if she says abusive things to him. It isn’t a rational-critical disagreement, but Winston might be wise to decide that a rational-critical argument was never on the table anyway.

Appeals to emotion aren’t necessarily a problem in rational-critical argumentation. They are fallacious (argumentum ad misericordiam) under some circumstances. If Winston says that it will break his heart if Emma makes certain arguments, and Winston really doesn’t want to hear that argument, he can set that boundary, but it isn’t a rational-critical disagreement from that point on.

In other words, people can set boundaries for discussions; if they can’t agree on those boundaries, then they might need to have a rational-critical disagreement about what those boundaries are. It might not be possible for them to agree on boundaries; it might be an issue that isn’t subject to rational-critical disagreement, or one of the people involved might be incapable of arguing rationally about it.

2. Burden of proof rule
“A party that advances a standpoint is obliged to defend it if asked by the other party to do so.”

In general, the rule of thumb is that the affirmative (“A is B” or “A leads to B”) has the burden of proof because negatives (“A is not B” or “A does not lead to B”) can be hard to prove. For instance, if Emma and Winston are arguing about whether a politician, Hubert, is racist, it’s going to be almost impossible to have a good conversation unless Winston first says why he thinks Hubert is racist (he’s making the affirmative case, affirming that something is true). Then Emma can refute it (since she has the negative case, saying that Winston’s claim is not true). But, once Emma starts to refute that claim, then she has the burden of proof to support whatever claims she is making (such as that Winston has a bad definition of “racist”).

People try to avoid the burden of proof by shifting the stasis (that is, trying to change what the argument is about). Motivism, ad hominem, genetic fallacy, and various fallacies that result from binary thinking fall into this category. If Emma says to Winston, “Oh, you’re just saying that Hubert is racist because you’re a Social Justice Warrior, and you think you’re so woke,” that’s motivism and ad hominem (Emma gets a twofer!). She’s violated this rule because she’s trying to make Winston’s character the issue rather than Hubert’s racism. If Emma believes that only Chesterians think Hubert is racist, and she believes that all Chesterians are socialists, and all socialists are Stalinists, then she might say, “Oh, Hubert is racist? Well, how did that whole gulag thing work out?” and try to engage Winston in a defense of Stalinism. That’s a violation of this rule—she’s trying to make Stalinism the issue.

Most people arguing for conspiracy theories violate this rule—the more that they’re claiming there is a huge coverup, the more likely they are to avoid the burden of proof. People arguing about the existence of God throw the burden of proof back and forth like a long and boring tennis game.

A move that is often (but not always) a violation of this rule is the fallacy of tu quoque (sometimes called the accusation of hypocrisy). If Winston says, “Hubert is racist,” and Emma says, “Well, what about that time that a Chesterian said something racist?” she might be violating the rule. It depends on what claim Winston is making. If Winston is claiming that Chesterians are better than Hubertians, what she’s saying is relevant. If he’s saying that Hubert shouldn’t be in charge of the Senate Committee on Diversity and Inclusion, it’s irrelevant, and a violation of this rule.

This point—what are we arguing about?—is important for understanding fallacies, since a lot of moves are fallacious because they’re irrelevant. If Winston says, “Chester is a young and strong dog who can withstand the stress of protecting the red ball,” then Emma pointing out that Winston has a long history of lying about Chester’s health is relevant. It’s part of a rational-critical argument. But Emma arguing that Winston shouldn’t be believed because he likes Nickelback is an ad hominem since it’s irrelevant.

If Emma points out that Winston has often lied about Chester’s health and so shouldn’t be believed now, and Winston says that Emma really hurt his feelings, and she owes him an apology for hurting his feelings, he’s trying to shift the stasis to the question of his feelings. If he says that Emma shouldn’t criticize him because he recently broke a nail, and he’s really upset about it—it’s either a violation of the first rule (some claims are off the table) or this one. Or both!

3. Standpoint rule
“A party’s attack on a standpoint must relate to the standpoint that has indeed been advanced by the other party.”

This rule prohibits the straw man fallacy—if Emma has a complicated and nuanced argument, and Winston attributes to her a really stupid argument, he’s violated this rule.

People violate this rule while thinking they’re making good arguments for three reasons: first, in-group/out-group thinking (which reduces everything to us v. them); second, and closely related, the tendency to think in paired terms; third, and perhaps most important, inoculation.

In a culture of demagoguery, and we’re in one, people believe that our vexed, complicated, varied, and nuanced world of policy options is reduced to two groups: us and them. Us is narrowly defined, and “Them” is simply anyone who is not Us. The research on us v. them thinking (in-group v. out-group) is clear that people committed to this way of thinking about the world homogenize the out-group. So, if your in-group is Wisconsin Synod Lutherans, and you’re deep in a culture of demagoguery, then you’re quite likely to believe that Evangelical Lutherans, Muslims, atheists, Satanists are pretty much all the same. [1] Therefore, you think you have proven that this ELCA person is bad by presenting an example of something a Satanist did or said. [2]

This rule and the “unexpressed premise rule” have a complicated relationship. In a good argument, people sort them out. In the fallacious version, the unexpressed premise is inferred by identity: the sort of person who argues this is a member of that group, and they also argue that. An example of false inferences from identity would be something like this. Imagine that Emma argues that we should be nice to little dogs; since Chesterians are known for hating little dogs, Winston might infer that she must not be Chesterian. If Chesterians are also known for hating squirrels, then Winston might infer that Emma must like squirrels. (That’s how the false inference about ELCA Lutherans being Satanists works.)

It feels like a logical inference, but only if Winston falsely assumes that all Chesterians are the same. The way his argument works is:
Everyone is either A or B. All A do C. All B do D. Emma does not do C; therefore, she must not be A. Therefore, she must be B; therefore she must do D.

(Everyone is either Chesterian or Hubertian; all Chesterians hate little dogs; Emma does not hate little dogs; therefore, she must be Hubertian; all Hubertians like squirrels; therefore, Emma must like squirrels.)

His whole chain of inferences becomes at best a possible inference if there are options other than A or B (Chesterians or Hubertians), if most (but not all) Chesterians hate little dogs, and so on. Winston is attacking Emma on a point not related to the standpoint she actually advanced.
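
For readers who want the inference spelled out formally (my notation, not the post’s), the chain is classically valid only when both universal premises hold without exception:

\[
\forall x\,(A(x) \lor B(x)),\quad \forall x\,(A(x) \to C(x)),\quad \forall x\,(B(x) \to D(x)),\quad \neg C(e)\ \vdash\ D(e)
\]

Allow a third option besides A and B, or weaken “all A do C” to “most A do C,” and D(e) no longer follows; the inference is only as good as the binary and the stereotypes feeding it.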

4. Relevance rule
“A party may defend a standpoint only by advancing argumentation relating to that standpoint.”

This rule is pretty straightforward; again, it’s about staying on-topic. It prohibits fallacies of relevance—such as ad hominem, ad misericordiam (irrelevant appeal to pity), ad verecundiam (irrelevant appeal to authority), and non sequitur (the large category of drawing a conclusion that doesn’t follow).

As mentioned above, an attack on the character of an interlocutor isn’t necessarily irrelevant and therefore not necessarily fallacious. Similarly, appeals to emotions or authority aren’t necessarily irrelevant. All arguments have an emotional connection—we disagree because we care about something. If we didn’t care at all—if we had no emotional attachment to the issue—we wouldn’t bother disagreeing. If Winston argues that being nice to little dogs helps squirrels get to the red ball, it’s because he believes that squirrels getting to the red ball is a bad thing. He doesn’t want it to happen. He is afraid of it happening.

If Emma believes that the Chesterian position about little dogs causes unnecessary cruelty to little dogs, then she cares about little dogs; it makes her sad. People who argue that a policy is good because it will save a lot of money or it’s bad because it will cost a lot of money have an affective attachment to money; they like it.

If Winston and Emma are disagreeing about whether little dogs are conspiring with squirrels, and Winston tells a highly emotional story about how a little dog once took food from a Great Dane puppy, that’s a violation of this rule. Not because it’s highly emotional, but because it’s irrelevant.

Appeals to authority are similar. Imagine Emma says, “Little dogs are not involved in the conspiracy; I am personally certain of this.” That’s probably an irrelevant appeal to authority—it’s an appeal to her personal conviction, and her personal conviction is irrelevant. It’s only relevant if she is an expert who has read every study on the issue, and looked at all the evidence. Emma saying, “Well, Ruth has concluded that squirrels are not involved, and she is a Supreme Court justice” (or Nobel prize winner, famous professor at a prestigious university, person with impressive degrees, tremendously successful entrepreneur) is a violation of this rule, since there isn’t a Nobel prize in the squirrel conspiracy.

Similarly, an appeal to Scripture, a quote from Einstein, or something your stylist told you that her brother-in-law’s chiropractor’s lawyer told him is an irrelevant appeal to authority.

It’s possible to have really fun and interesting conversations in which non-experts speculate on topics, but it’s just shooting the breeze.

The last fallacy of relevance I want to mention (there are lots more) is the big category of non sequitur. There are lots of them, and many lists of fallacies split them into different kinds. But, basically, they all come down to a tendency we have to think that a true argument is a valid argument: that an argument of the form “true statement because another true statement” must be a good one.

Emma might believe that “little dogs are good because many bunnies are fluffy.” Many bunnies are fluffy, but that has nothing to do with whether little dogs are good (although, personally, I do think they are). That argument about bunnies is irrelevant, even if true, so it’s a violation of this rule.

5. Unexpressed premise rule
“A party may not deny a premise that he or she has left implicit or falsely present something as a premise that has been left unexpressed by the other party.”

This one is really hard for some people to understand—that an argument they’re making might assume a premise of which they’re unaware. They think that you know what you’re assuming. We’re especially likely to violate this rule when we adopt an argument from another source that sounds good, and we haven’t really thought it through.

I got into this argument recently. Someone said something along the lines of, “Liberals are idiots because they appeal to stereotypes.” That’s appealing to a stereotype, but the argument assumes that appealing to stereotypes is idiotic. So, the person was saying they’re an idiot. I couldn’t get them to understand that their argument logically assumed a premise they didn’t believe. They got mad because they thought I was calling them an idiot, and I couldn’t get them to understand that by their own argument they were an idiot. They were calling themselves an idiot, and that’s what made it a bad argument.

We’re responsible for our premises. A lot of interesting disagreements arise because we disagree about the premises, and so we end up having to talk about things like whether stereotypes are bad, if we can reason without them (we can’t), what distinguishes good from bad stereotypes.

6. Starting point rule
“A party may not falsely present a premise as an accepted starting point nor deny a premise representing an accepted starting point.”

Violating this rule often goes by the name of “begging the question” (a phrase that leads to a lot of confusion, since people now use that phrase to refer to something else entirely—when something we’re arguing leads us to have to consider another question), or “assuming what’s at stake.” It’s really a kind of circular argument.

So, if Emma were to say, “Okay, we both agree that size is unrelated to goodness,” that would violate this rule, since Winston assumes size and goodness are related. (Socrates does this all the time in Platonic dialogues, tricking his interlocutor into agreeing to a premise they don’t actually believe.) Van Eemeren and Grootendorst give examples of people sliding premises into an argument via adjectives, adverbs, nouns or noun phrases (if Emma were to refer to “the ridiculous notion that size and goodness are related,” “Chester’s dishonestly arguing that,” “the delusion,” or “the proposition only promoted by idiots that…”).

Again, I’m not saying those sorts of moves are prohibited, but when a disagreement is in this realm, it isn’t rational-critical argumentation. It might be useful; it might be productive; it might be necessary. It just isn’t rational-critical.

I’ve run across the second part of this rule less often—when people try to deny a premise that is an accepted starting point (except in the kind of situation discussed in #5, and I don’t think that’s what they mean here). That’s probably because most of my disagreements happen on social media, so when people try to misrepresent the beginning of the argument, it’s easy enough to go up a thread and quote them.

It does happen sometimes—“I never said that…” when they clearly did. When it’s pointed out that they did say it, you can sometimes have a good conversation—perhaps they really did express themselves badly, left out a word, or used terms that have different meanings in different contexts. But if they did say it, and they won’t own it, this isn’t a good faith argument at all.

7. Argument scheme rule
“A party may not regard a standpoint as conclusively defended if the defense does not take place by means of an appropriate argumentation scheme that is correctly applied.”

There are a few ways to think of this one, and here I part company with Van Eemeren and Grootendorst. They go on to describe a really limited way of thinking about argumentation that is hard to apply to how people actually think. They don’t seem to imagine disagreements that happen within the messy world of ideological commitments (including religion). I think we are all always within that world.

That we are always arguing from within our ideological commitments doesn’t mean we’re incapable of rational-critical argumentation.

They’re making a crucial point: it isn’t just what you say, but how you’re arguing for it. Winston might argue that little dogs are part of the squirrel conspiracy by:
– relying on a single example of a little dog that was friends with a squirrel;
– finding one quote from The Book of Dog that can be read as condemning little dogs;
– arguing that since Göring liked little dogs, defending little dogs makes you a Nazi;
– appealing to one study that said little dogs are evil;
– describing a personal experience with a little dog.

These are all argument schemes, ways of arguing.

If Winston is engaged in rational-critical argumentation (or even good faith argumentation—a lower bar, and a different post), then he is committed to viewing those ways of arguing as valid, regardless of what position they support. So, if Emma can provide a single example, find one quote from The Book of Dog, point out Hitler’s love of big dogs, cite one study, or describe one personal experience, then Winston has to abandon his claim or find new evidence.

If Winston won’t abandon the claim or find new evidence, then his argument is grounded in ways of arguing that he himself thinks are invalid. Winston is admitting that he is using “argument” to defend a position he will neither abandon nor open to scrutiny.

In my experience, the sort of person who thinks a single example proves them right, but dozens of counter-examples are irrelevant, isn’t open to persuasion at all. They’re also total suckers for cons because they tend to reason from in-group loyalty, and so anyone who appears to them to be in-group can sell them a used car with neither engine nor wheels.

8. Validity rule
“A party may only use arguments in its argumentation that are logically valid or capable of being made logically valid by making explicit one or more unexpressed premises.”

For me, this is folded into the previous rule, since I’ve never run across anyone who violates this rule who didn’t also violate #7. But, basically, if you’re engaged in rational-critical argumentation, you worry about the validity of the arguments you’re making, not just whether you’ve found talking points that make you feel good about the stance you already had.

9. Closure rule
“A failed defense of a standpoint must result in the party that put forward the standpoint retracting it and a conclusive defense of the standpoint must result in the other party retracting its doubt about the standpoint.”

Eh, kind of.

A lot of arguments on social media end up with someone doing their impression of the knight that clearly lost. People need to enter a disagreement with some clear sense as to what it would mean to be proven wrong. If Emma and Winston engage in rational-critical argumentation, and Emma can’t defend her position, she really should say, “Yeah, I can’t defend this.”

And that should be an important moment of self-reflection. But she shouldn’t abandon an important belief just because she “lost” one argument. She should, however, look into why she “lost” it. Perhaps she was relying entirely on arguments her in-group media had given her; perhaps the argument moved fast, and she didn’t notice Winston’s skeezy moves; perhaps she needs to develop a more nuanced argument.

Perhaps she needs to get out of her informational enclave, and try to find and read the smartest opposition arguments.

Yeah, actually, we all need to do that.

10. Usage rule
“A party must not use formulations that are insufficiently clear or confusingly ambiguous and a party must interpret the other party’s formulations as carefully and accurately as possible.”

It’s always puzzled me that Van Eemeren and Grootendorst make this the tenth rule (Habermas makes it the first).

It seems to me that the starting point of any disagreement is the assumption that people mean what they say.

The less charitable interpretation is that this rule is silly. I’ve spent years arguing with people, and I’ve rarely run across an individual who is deliberately ambiguous or who chooses to be unclear. People say things that seem clear to us at the time. If someone posts something, and later tries to say they meant something else, we’re litigating rule #6.

There are lots of people who are deliberately ambiguous (“what is is,” “quality,” “natural”), but that’s bad faith argumentation.

So, if you do find yourself arguing with someone who refuses to clarify their position, they’re a jerk. They aren’t just refusing to engage in rational-critical argumentation; they’re also uninteresting.


[1] I’m sorry to say that this is not one of my ridiculous hypotheticals.
[2] It’s all about paired terms, which is another post I need to write, although Perelman and Olbrechts-Tyteca already explained it very well.

Flinging claims for Trump

[Picture of Trump. This image is from: https://www.snowflakevictory.com/]

There is a pro-Trump website telling Trump supporters “how to win an argument with your liberal relatives.” One of the main arguments for Trump was (and is) that he would get the best people to work for and with him. So, this site presents the arguments that the best people make for Trump, or, in other words, the best arguments for Trump. Does this “best arguments for Trump” webpage have good arguments?

Someone making a rational argument

  • makes claims supported with good evidence, and so presents sources for claims;
  • can identify the conditions under which they would change their mind;
  • has claims that are logically connected, avoids fallacies, and applies standards across groups (so, for instance, if you say you are appalled at feeding squirrels, then you are just as appalled at in-group squirrel-feeding as at out-group squirrel-feeding);
  • engages the best out-group arguments, or, if engaging a specific set of claims that aren’t good arguments, at least presents those out-group claims accurately.

Engaging in rational argumentation isn’t very hard, and it’s easy to do if you’ve actually got a good argument. Rational argumentation isn’t about what claims you make, whether they seem true to people who already agree, or whether the people making the claims think they’re being unemotional. Rational argumentation involves a fairly low bar; it’s just the list above. And that list isn’t controversial.

If you take it out of the realm of politics (where people are especially tribal), then it’s clear that “rational argumentation” is actually “sensible ways to think about conflict.” Imagine that you have a boss who says that you should be fired because reasons. You’d be outraged (justifiably) if your boss couldn’t cite sources, was just operating from in-group bias, unfairly represented what you’d said, and wasn’t listening. That’s a shitty boss. And it’s reasonable for you to ask that your boss make a decision about firing you rationally.

That’s a shitty boss because it’s a person who is making decisions badly. And we’ve all had that boss. What would it be like if we extrapolated from that shitty boss, who made decisions badly, to our own tendency to make decisions badly? What if we’re all the shitty boss?

But back to the Trump page—does it present good arguments? It fails every one of the criteria for rational argumentation.

For instance, it not only fails to link to sources that support its claims; it never links to the opposition.

Why not? Why not link to data that would support the claims it’s making? Why not link to the opposition and its supposedly terrible arguments? Perhaps because it can’t, because doing so would make clear how false the page’s claims are. Take one example. On two of the linked pages, the claim is that “the 2020 Democrats are the ones who want to strip you of your private, employer-provided health insurance!” (“Trump approach”) That’s a lie in two different ways. First, some of the main candidates argue for single-payer health care, something that might lead people to choose not to get health insurance through their employer, but instead through government-based insurance—that’s what the pro-Trump healthcare page itself goes on to argue. So, the Dems don’t “want” to strip people of their private insurance—some Democratic candidates want to give people a choice. (Sanders is the only one who has unequivocally said he would get rid of private insurance, not something, by the way, that a President can do without Congress.) And if, as the pro-Trump page claims, so many people would leave private insurance that the rates became unmanageable, that would be because the government-funded insurance program is better than the private one. In other words, this argument is an admission that the current system is inadequate.

Second, many Democratic candidates have not endorsed any such plan, so the claim that “the Dems” are advocating it is simply a lie. If what you’re saying is true, you don’t have to lie.

There is only one place where the site gives a link—to Biden saying that he insisted that a Ukrainian prosecutor get fired. The page admits that this claim has been debunked but, without any explanation or argument, insists it’s true. That isn’t an argument: that’s just direct contradiction.

That argument about Biden and the Ukraine is fallacious in that it is tu quoque (or, “you did it too!”). Whether Biden asked that the Ukrainian prosecutor be fired in order to prevent an investigation of his son’s activities has no relevance to whether Trump told Ukraine that he would withhold foreign aid (which he did, in his version of the phone call). Whether Trump is now refusing to allow people to testify in a trial—that is, obstructing justice—has nothing to do with anything Biden did. Tu quoque is how little kids argue—when caught with a hand in the cookie jar, claiming that little Billy also stole cookies is irrelevant. You might both have stolen cookies. But that’s a fallacy that runs throughout the pages—Trump’s reducing environmental protections is good because China is bad. Trump’s healthcare plan is good because the Democrats’ is bad. They might both be bad.

The set of claims about Ukraine has another fallacy that runs throughout the site: it says that “under President Obama, Ukraine never received this kind of lethal military aid AT ALL. It is thanks to President Trump, that the Ukrainians are getting the aid in the first place.” That is an example of the fallacy of equivocation (also called the fallacy of ambiguity): an argument that is technically correct, but deliberately misleading (much like Bill Clinton’s “it depends upon what is is”). It looks as though it’s saying that Ukraine never got military aid from Obama AT ALL, something that is false. Technically, it’s saying that Ukraine never got “this kind” of aid or “the aid”—meaning the Javelin missiles. That’s technically true, just as it was technically true that Clinton was not, at that very moment, having sex with an intern. But it’s misleading.

It’s hard to argue with someone engaged in equivocation, since it necessitates getting into the technicalities—that’s why people who aren’t arguing in good faith (that is, whose minds are not open to persuasion) engage in it.

Another common strategy of this site is to give Trump credit for what Obama or other Presidents did. For instance, the page on the environment begins, “America’s environmental record is one of the strongest in the world and the U.S. has also been a world leader in reducing carbon emissions for over a decade. We have the cleanest air on record and remain a global leader for access to clean drinking water.” Notice that this claim is vague, and so hard to disprove (like an ad that says, “We have the best prices”—compared to whom?): what record? Not the world record. It’s seventh.  It isn’t even clear to me that the US now has the cleanest air in its record. But we can’t know what the claim is because it gives no sources. Similarly, the claim that “President Trump has taken important steps to restore, preserve, and protect our land, air, and waters” is unsupported, unexplained, and unsourced.

To the extent that the air is cleaner, it’s because of what was done in the past, by other people, particularly Obama, but also the Congress that passed the Clean Air Act and the 1990 Amendments.

The final problem with the page that I’ll mention (I could go on) is one that contributes significantly to the demagoguery of the page (and it is demagoguery): the implication is that anyone who disagrees with Trump is a “liberal,” and that simply isn’t true. A large number of people who believe that Trump should be convicted are conservative.

In short, the page doesn’t engage in rational argumentation. It doesn’t even engage in argument. So, would someone following the script provided by this webpage win any argument with any “liberal”? No. Because they wouldn’t be arguing. They’d be making claims, claims that are sometimes false, often misleading, almost always unsourced, and always unsupported, but never argued.

A person who followed this script and claimed to have won the argument would be like someone who claimed to have won a chess game because they turned over the board and fed the pieces to the dog.

If Trump can’t be supported with rational argumentation, then maybe it isn’t rational to support him.

Trump supporters/critics and policy argumentation

I spend a lot of time in public and expert realms of political dispute. And one thing I’ve noticed in the last two years is that, in the public realms, supporters of Trump have stopped engaging in rational argumentation about him; they used to, but now they’re not engaging in argumentation at all. They’ll sometimes do a kind of argumentative driveby, popping into a thread that’s critical of Trump in order to drop in some talking point about how he’s a great President, and then leaving. Sometimes they give a reason for refusing to engage in argumentation, and it’s an odd reason (critics of him are biased). This is worrisome.

We’re in such a demagogic culture—in which people assume that the world is divided into fanatics of left v. right—that I have to say what should be unnecessary: not everyone who supports Trump is just repeating talking points. In fact, I can imagine lots of arguments for Trump’s policies that follow the rules of rational argumentation (and I’ve seen them, but not in the public realm).  I think Trump’s policies can be defended rationally. Apparently, his supporters don’t.

And that is what worries me.

What I’m saying is that there are people who do just repeat talking points (all over the rich and varied place that is the public sphere), and the kind of people who have always just repeated pro-Trump talking points used to be following advice on how to engage in argumentation; now they’re not. That kind of Trump supporter has stopped engaging in argumentation at all.

Just to be clear: I mean something fairly specific by the term “rational argumentation” (not how “rational” is used in popular culture, and argumentation, not argument—this will be explained below). While I’m not a supporter of Trump, I do think his policies can be defended through rational argumentation—that is, a person could argue for them while remaining within the rules described below. That means, oddly enough, that I don’t think Trump’s policies are indefensible, but his followers seem to think they are.

That’s worrisome.

I’ve spent a lot of time wandering around the digital public sphere, and thinking a lot about politics. And I’ve come to think that we are in a culture of demagoguery, in which every policy question is reduced (or shifted) to a zero-sum battle between “us” and “them.” That reduction is false and damaging. There are not two sides to any policy issue—there are far more. And our political culture is not a binary.

Personally, I think a useful map of our political culture would be, at least, three-dimensional, and even then you’d have to have different maps for different issues. But that’s a different post.

In my wandering, I’ve noticed that you can see talking points created by a powerful medium that are then repeated by people for whom that medium is an in-group authority. This isn’t a left v. right thing. (No issue is.)  The talking points on “get rich fast” shifted when James Arthur Ray killed some people; the same thing happened on the “get laid quick” sites after the Elliot Rodger shooting. The talking points on dog sites changed after a study about taurine came out. I know what Rachel Maddow said on her show without watching her show; the same is true of Rush Limbaugh.

The pro-Trump (like the pro-HRC or pro-Sanders or pro-Stein) talking points used to be a mix of what amounted to tips on what to say if you’re engaged in policy argumentation and what amount to statements of personal loyalty (“s/he is a good person because s/he did this good thing”).

And you could tell what the talking points were by what your loyal pro-Trump or pro-Stein (or pro-raw dog food) Facebook friend (or Facebook group) asserted.

What worries me about the driveby dropping of a pro-Trump talking point and the refusal to engage in policy argumentation is that it suggests that the pro-Trump sources of argumentative points have abandoned policy argumentation. These people aren’t even trying. That’s puzzling.

What makes arguing in some digital spaces interesting is that people are now often arguing with known entities—I’m watching someone make arguments about Trump whom I watched make arguments about Clinton or Obama.

What I’m seeing, in places that used to have rational-critical argumentation in favor of Trump, is that people aren’t even trying. (So, just to be clear, anyone saying that my argument can be dismissed because I’m not pro-Trump is showing that I’m right.)

What I want to use as the standard for a “rational” argument is van Eemeren and Grootendorst’s ten rules for a rational-critical argument. They are:

    1. Freedom rule
      Parties must not prevent each other from advancing standpoints or from casting doubt on standpoints.
    2. Burden of proof rule
      A party that advances a standpoint is obliged to defend it if asked by the other party to do so.
    3. Standpoint rule
      A party’s attack on a standpoint must relate to the standpoint that has indeed been advanced by the other party.
    4. Relevance rule
      A party may defend a standpoint only by advancing argumentation relating to that standpoint.
    5. Unexpressed premise rule
      A party may not deny a premise that he or she has left implicit or falsely present something as a premise that has been left unexpressed by the other party.
    6. Starting point rule
      A party may not falsely present a premise as an accepted starting point nor deny a premise representing an accepted starting point.
    7. Argument scheme rule
      A party may not regard a standpoint as conclusively defended if the defense does not take place by means of an appropriate argumentation scheme that is correctly applied.
    8. Validity rule
      A party may only use arguments in its argumentation that are logically valid or capable of being made logically valid by making explicit one or more unexpressed premises.
    9. Closure rule
      A failed defense of a standpoint must result in the party that put forward the standpoint retracting it and a conclusive defense of the standpoint must result in the other party retracting its doubt about the standpoint.
    10. Usage rule
      A party must not use formulations that are insufficiently clear or confusingly ambiguous and a party must interpret the other party’s formulations as carefully and accurately as possible.

These are rules for rational-critical argumentation; they aren’t rules that people have to follow in every conversation.

For instance, I’m not saying that people involved in a discussion can never say that some arguments are off the table, or that people can never refuse to engage with another party (although both of those moves would be violations of Rule 1). I’m saying that, when that rule is violated, the person whose views were dismissed and the person doing the dismissing are not engaged in rational argumentation with each other. They might still have a really good and interesting conversation, or a really fun fight, but it isn’t rational argumentation.

And what I’m saying is that in various places I hang out, supporters of Trump used to engage in argumentation to support their claims, but they’re doing it much less—in fact, not very often. If they don’t do a driveby (one post and out), they say that they won’t argue with anyone who disagrees with them because that person is biased.

Both of those moves—one post and out, and refusing to engage with counter-arguments because the very fact of their being counter-arguments makes them “biased”—are violations of Rule 1. They assert that criticizing Trump means a person is so biased that their views can be dismissed, but that’s a thoroughly entangled and irrational argument (it’s even weirder when the accusation is “Trump Derangement Syndrome,” since many of the people who fling that accusation around still suffer from Obama Derangement Syndrome).

That’s a misunderstanding of what “bias” means and how it functions in argumentation. Of course people are biased—that’s how cognition works—but, if a person is so biased that it’s distorting their argument, then their arguments will violate one of the ten rules. Dismissing a position because the person is biased is a violation of Rule 1. It’s a refusal to engage in rational argumentation.

More important, this move is a rejection of argumentation, and democracy. Rejecting criticism of Trump on the grounds that criticizing Trump shows that the critic is biased is not just an amazingly good example of a circular argument, but a move that makes it clear that the person doesn’t want to listen to anyone who disagrees. Argumentation and democracy share the premise that we benefit from taking seriously the viewpoints of people with whom we disagree.

We are in a culture of demagoguery, in which far too much public discourse, all over the political spectrum, is about how you shouldn’t listen to that person because s/he is biased. And the proof that they’re biased? That they disagree.

If a person is biased, and we are all biased, but their arguments can be defended in rational-critical argumentation, then their arguments are worth taking seriously, regardless of the bias of the person making the argument.

Jeremy Bentham, in the early nineteenth century, identified the problem with dismissing an argument because you don’t like the person making it. Sometimes it’s called the genetic fallacy, and sometimes it’s motivism.

In any case, any Trump supporter who refuses to engage anyone who criticizes Trump, on the grounds that that person is “biased,” is engaging in the fallacy of motivism (and so violating Rule 8), as well as violating Rule 1. (And so is anyone refusing to engage a Trump supporter purely on the grounds of their being a Trump supporter.)

Dismissing a person’s position as irrational because they do or don’t support Trump is the admission of an inability to have a rational argument with that person. If I refuse to engage in argumentation with any Trump supporter, purely on the grounds that they support Trump, then we have to start wondering about whether my criticism of Trump can be rationally defended. And, while I see many people who make exactly that move—dismiss the person, not the claims, from even the possibility of rational arguments, because the person supports Trump—I do often see people trying to engage in argumentation with Trump supporters.

I’m not seeing Trump supporters willing to engage in argumentation. I see them willing to make claims, but not engage their opposition rationally. And, as I said, that’s new.

One of the ways of not engaging the other side that I see a lot of people (all over the political spectrum) use is to violate the third rule. That is, if Chester says he really likes Trump’s 2018 missile strikes against Syria, and thinks those were an appropriate response, it’s unhappily likely that Hubert will respond by saying, “Oh, so you think children should be thrown into concentration camps?” Chester didn’t say he liked all of Trump’s policies, let alone his policies regarding families trying to enter the US.

There are two very different arguments that Chester might be making: “Trump is a good President as is shown by his good judgment regarding the Syrian missile strikes” or “Trump’s missile strikes against Syria were wise policy.” Trump’s immigration policy might be relevant for the first argument, but not the second. An even more troubling way of violating the third rule is for Hubert to decide that all Trump supporters are the same, and, therefore, since some Trump supporters deny evolution, and Chester is supporting a particular policy of Trump’s, to attribute evolution denial to Chester. Interlocutors make that (fallacious) move because they believe that the world is divided into two groups, and that the opposition is a homogeneous group—you can condemn any individual out-group member by pointing out a bad argument made by any other out-group member.

[This is another move that people all over the political spectrum make, and it makes me want to scream.]

Right now, one of the pro-Trump talking points is that the economy is strong, and that shows Trump is a great President. People drop this into arguments about issues that have nothing to do with the economy. Even more troubling is that it seems to me that the people making the argument don’t defend it—it’s often one of the argumentative drivebys—but, more important, it’s often irrelevant.

Most recently, I saw it in a thread where someone had made a comparison between Hitler and Trump, about the comparable chaos in the two administrations. Dropped into that argument, the move—criticism of Trump on X point is false because the economy is good—was a perfect example of violating the fourth rule (about relevance). Whether Trump has improved the economy doesn’t invalidate the claims about how the chaotic administrations are comparable.

That argument also violated Rule 5, in that the unexpressed premise of that argument is that a political leader who improves the economy is good. And Hitler greatly improved Germany’s economy—for a while. So it was a particularly bungled attempt to disprove a point.

I’m seeing that talking point a lot, made by people who would not give Obama credit for improving the economy—saying that Obama simply benefitted from what the Bush Administration had done. So, when the economy is strong, and it’s a President they like, they attribute the economy to the President; when they don’t like the President, they don’t (this, too, is far from unique to Trump supporters).

That’s a violation of the eighth rule—the argument that “Trump is a good President because the economy is strong” has the unexpressed premise that a strong economy means the current President is good. The people who make that argument for Trump but not Obama (or vice versa) reject the validity of their own premise.

For instance, I’m seeing people who believed any horrible thing about Obama, and who worked themselves into frenzies about Michelle Obama’s sleeveless dress, Obama’s golfing, his vacations, the cost to the US of his vacations, the Clintons’ possibly having financially benefitted from their time in the White House, Bill Clinton’s groping, and HRC’s problematic security practices regarding classified information, now defending a President who has done worse on every single count.

They are not reasoning about what makes a good President, grounded in standards that apply across all groups.

This is rabid factionalism. This is being foaming-at-the-mouth loyal to your in-group, and then finding reasons to support that loyalty (such as the one free grope argument).

People who are loyal to their in-group engage in motivated reasoning. And, let’s be honest, we all want to be loyal to our in-group. In motivated reasoning, there is a conclusion the person wants to protect, and they scramble around and find evidence to support it—they are motivated to use reason to support something they really want to believe. That isn’t rational, and it leads to arguments that can’t be rationally defended because a person trying to make a case that way has unexpressed premises in one set of claims that are contradicted by the unexpressed premises in another set of claims.

When it’s pointed out to someone that they can’t rationally defend their claims about Trump, I often see them respond, “Well, [example of a Democrat being irrational or having made an irrational argument].”

This is a fairly common kind of response, as though any bad behavior on the part of anyone on “the other side” cleans the slate of any in-group behavior. This fallacious move (a violation of Rule 7) relies on the false premise that any political issue is really a zero-sum contest of goodness between the “two sides.” Since it’s zero-sum (as though there is a balloon of goodness, and if you squeeze one side, there is more on the other), any showing of “badness” on the “other” side squeezes more air into yours.

A Trump critic making an irrational argument doesn’t magically transform an irrational pro-Trump argument into a rational one. Now they’re both irrational. It isn’t as though there is a zero-sum of rationality between the “two sides.” (For one thing, there aren’t two sides.)

This is really concerning in a democracy. Ideally, people should be arguing for policies rationally—which isn’t to say unemotionally; notice that none of these ten rules prohibits emotional appeals. The eighth rule, about logical validity, and the fourth, about relevance, imply a prohibition of argumentum ad misericordiam—which is not the fallacy of making an emotional appeal, but the fallacy of making an irrelevant emotional appeal.

I’m not concerned that there are people who support Trump; I’m not concerned that there are Trump supporters who are clearly repeating talking points from their media. I’m concerned that those talking points are clearly not intended to be used in policy argumentation; I’m concerned that pro-Trump rhetoric is not even trying to fall within the realm of rational argumentation.

Unhappily, critics of Trump, it seems to me, are also arguing about his identity, and not the rationality of his policies.

Trump has policies. If they’re good policies, they can be defended through rational argumentation. If they can’t, they’re bad policies.

One of the most troubling aspects of the now dominant pro-Trump rhetoric is that it depends on an argument about his “success” as a businessman that is similar to the argument made about the “success” of his proposals. As it has come out that his businesses lost money hand over fist, people are arguing that he was a successful businessman because he personally succeeded financially. This isn’t an unusual argument—I was surprised when I saw it made on behalf of a motivational speaker whose claims of personal wealth were exposed as completely false. The argument was that, if you can rack up that much debt, that’s a kind of success. In other words, it’s saying that, as long as the method is working, it’s a good method.

That’s a little bit like describing falling out of a plane as successful flying—right up to the moment of contact with pavement.

That we are now getting a good outcome is not rational policy argumentation. Nor is arguing about whether Trump is or is not a good person.

Trump shouldn’t be defended or attacked as a person, and his policies should be attacked or defended regardless of his person. Neither defending nor attacking his policies should be a reason to dismiss the argument being made. We need to argue policies.

Mo Brooks, the Big Lie, and Bad Hitler analogies

There is a media kerfuffle, and much pleasurable outrage, about an Alabama congressman quoting a foaming-at-the-mouth antisemitic section from Mein Kampf.

As is usual with the media, it’s all outrage, oddly misplaced, and misses the really important point about the incident.

Hitler says that Jews stick to one big lie, and just keep repeating it. Of course, that is what Hitler did, and was doing in the moment of the accusation.

Hitler’s point is that, if you create a big lie, you should stick to it, and insist on it, and people will accept it. And Hitler did that all the time, as in his insistence on blaming all of Germany’s problems on socialists (whom he insisted on characterizing as communist). But there is a performative point that Hitler is making, too, meaning that Hitler’s rhetorical power came not just from what he argued but how he argued.

Hitler blamed everything, including the faults of his own party, on the Jews. That was his big lie: that Germany’s problems could be solved by excluding the impure people (Jews, Roma, Sinti, homosexuals, communists, union labor organizers, feminists, immigrants) from the community.

Brooks was, in his speech, repeating the GOP Big Lie: that Trump didn’t collude with Russia, that it doesn’t matter if he did, and that anyone who is concerned about the issue is a socialist. There is another GOP Big Lie Brooks repeats: that Hitler was a leftist because his party was socialist.

Hitler, in that passage was (as he always was) projecting onto his out-group (“the Jews”) what he was doing in that moment.

And that is what matters about the Alabama congressman. Not that he cited Hitler, but that he was projecting. In a speech that was the repetition of a Big Lie (that Trump did nothing wrong), Brooks condemned the left (whom he called “socialist”) for doing what he was doing in the moment of the accusation: repeating a Big Lie. And that’s important.

But various leftist media instead condemned him for quoting a rabidly antisemitic passage from Hitler (e.g.). That’s an incoherent criticism. He was quoting Hitler in order to condemn anyone who disagreed with him. He wasn’t endorsing Hitler. He wasn’t endorsing Hitler’s antisemitism.

That criticism either assumes a kind of guilt-by-association argument, or else assumes that it can invoke the pleasures of outrage on the part of people who won’t click through to figure out what he actually said.

I think it’s probably a bit of both, and I think both are harmful to the left. If it’s the association argument, it’s promoting a notion of pure speech, speech that never mentions anything bad. If it’s the pleasure of outrage, it just makes lefties look like dumbasses.

Brooks’ argument was bad faith; it was also incoherent; it was also self-referential. Let’s take him to task for those issues, not for antisemitism.

Folk rhetorical theory and the “argumentum ad Hitlerum”

[This is a talk–a revised version of one I posted earlier–so it doesn’t have links.]

Wayne Booth once complained that, when he mentioned he was an English teacher, people on trains wanted to talk about commas. If he had told them he taught rhetoric, they would have said something about Hitler. In papers in argumentation classes, Hitler references are as common, and about as welcome, as dawn-of-time introductions. Like dawn-of-time introductions, Hitler references aren’t unwelcome because they’re always wrong; they’re unwelcome because they’re so easy, so thoughtless, and so rarely relevant. In politics, it’s even worse; hence the argumentum ad Hitlerum fallacy, or Godwin’s law. Despite the miasma of Hitler references in politics, and Hitler’s reputation as the most powerful rhetor, teachers and scholars of rhetoric tend to avoid him.

We do so for various reasons, but at least one is that the popular (and even, to some extent, scholarly) understanding of Hitler’s power is so much more simplistic than the case merits that it seems hopelessly complicated to try to get in and untangle it. I want to argue that this is precisely why Hitler should figure more in our teaching and scholarship. The popular (let’s call it folk) explanation of Hitler’s success is simplistic and inaccurate, but it’s powerful in that it fits with the folk explanation of persuasion, which fits with the folk explanation of what distinguishes ethical from unethical persuasion, which fits with folk notions about what constitutes good versus bad citizenship.

Talking about Hitler is a way of talking about the problems with all those mutually confirming, and similarly damaging, folk explanations.

And here a note about terminology: when I proposed this paper, I was strongly influenced by Arie Kruglanski’s discussion of lay epistemology—that is, the common-sense way that non-experts think thinking works. But, the more I worked on the issue, the more I realized that it isn’t a question of experts v. non-experts—Kenneth Burke, various scholars of demagoguery, some historians, and other experts assume the explanations I’m talking about. I came to think the better analogy is Christopher Achen and Larry Bartels’s discussion of what they call the “folk theory of democracy,” which, as they point out, serves as the basis for a lot of scholarly work on political science and theory.

Here are the four folk explanations:

    • The folk explanation of what happened in Germany is that Hitler is the exemplar of a magician rhetor because he “swung a great people in his wake” (Burke 164), hypnotized the masses (and his generals, the generals claimed post-war). The disasters of Nazism are thereby explained monocausally: Hitler was a pure rhetorical agent, whose oratorical skill transformed the German people into his unthinking tools.
    • This explanation appeals to the folk explanation of persuasion, in which a rhetor determines an intention, identifies a target audience, and then creates a text that contains the desired message (often presented visually as an arrow) and shoots it at the target. If it hits, the target audience now believes what the rhetor wanted them to believe, and it was effective rhetoric. (Obviously, this reduces all public discourse to compliance gaining.)
    • Ethical rhetoric is one in which the rhetor, and the message the rhetor is sending, are ethical. And that is determined by ethical people asking themselves if the message is ethical (sometimes by whether the rhetor is ethical); Hitler’s rhetoric was unethical because it was intended to do unethical things. This is the folk explanation of the ethical/unethical distinction.
    • There are unethical rhetors out there, and, therefore, good citizens are ones who think carefully about the message being shot at them. That is, the dominant popular way of describing and imagining participants in public deliberation is as consumers of a product—they can be savvy consumers, who think carefully about whether it really is a good product, or they can be loyal consumers, who always stick to one brand, or they can be suckers, easily duped by inferior products (and so on). Good citizens think carefully about the political messages they consume. Ethical citizens recognize unethical rhetors and unethical messages, and resist them.

These are powerful narratives in that they enable the fantasy that each of us is a good citizen, an ethical person, who recognizes unethical arguments, and would, therefore, have opposed Hitler, and continues to oppose anyone like Hitler. (Hence argumentum ad Hitlerum—it isn’t about the political figure in question; it’s about a performative of being an ethical person with good judgment.)

These models are refuted by theoretical work (e.g., Biesecker’s 1989 “Rethinking the Rhetorical Situation”) and by empirical work on political reasoning (e.g., the work summarized in The Rationalizing Voter, 2013). They aren’t just wrong; they’re importantly wrong. They rely on a pleasurable but entirely indefensible othering of Germans.

That’s wrong, as I’ll discuss, but it’s importantly wrong because this explanation of what happened in Nazi Germany can make people feel good about themselves while they’re replicating the errors that Germans made. It says that, if you believe you are thinking critically about what a rhetor says, you are making sure it fits with what you think is ethical, and you only put your trust in someone you think is ethical, then you will never make the mistake Germans did.

This explanation of what happened in Germany is partially the consequence of post-war renarrations of pre-war events. Large numbers of Germans post-war claimed they didn’t know about the genocides, had nothing to do with them, and had resisted Hitler in their hearts. The Wehrmacht officers claimed they were just following orders (sometimes unwillingly), didn’t know about the genocides, and couldn’t break their oath to Hitler. Officials of churches claimed they were the real victims, and had resisted the Nazis all along.

None of that was true. Christopher Browning (Ordinary Men), Robert Gellately (Backing Hitler), Ian Kershaw (Hitler, the Germans, and the Final Solution), Michael Mann (Fascists) and various other scholars have shown that participation in, support for, or pragmatic acquiescence toward the genocides, imprisonment, and war-mongering of the Nazis were considerable, and often strategic and instrumental. People were not swept up by Hitler’s rhetoric. Support for the Enabling Act was a strategic gamble. Support for Hitler and the Nazis increased after he took power because people liked the improved unemployment rate, the remilitarization of Germany, the rejection of various treaties, the reassertion of Germany’s entitlement to European hegemony, and the conservative social agenda. Ian Kershaw says,

“The feeling that the government was energetically combating the great problems of unemployment, rural indebtedness, and poverty, and the first noticeable signs of improvement in these areas, gave rise to new hopes and won Hitler and his government growing stature and prestige.” (Hitler Myth 61)

They either liked or didn’t care about the antisemitism, the jailing of political opponents, and the politicization of the judiciary. They didn’t think Hitler was unethical, and they didn’t think his policies were unethical. Many thought he was a decisive leader who was getting things done, and many thought he was chaotic and unpredictable, but getting them what they wanted.

For instance, the Wehrmacht was not constituted of innocent victims of Hitler’s rhetoric or hopelessly bound by their oaths. As Robert Citino says, “The officers shared many of Hitler’s goals, however—defiance of the Treaty of Versailles, rearmament, restoration of Germany’s Great Power status—and they had supported him as long as his success lasted” (Last Stand 205). The officer class helped Hitler come to power in 1932-33 because

“They saw Hitler as a fellow nationalist, a bit crude, but one who could win the masses to the nationalist and conservative cause. His opposition to Marxism, his plans for German rearmament, his anti-Semitism: all these things harmonized well with the essentially premodern world view of the officer corps.” (Citino, Last Stand 211)

That he would later destroy Germany, enable the USSR to gain territory, and destroy the German officer class meant that post-war they could try to present themselves as having been victims all along—but they had helped him get into power, supported him in power, knew about the genocides, and engaged in them.

Similarly, that Hitler did, as he said he would, disempower the churches and imprison those who resisted Nazi control of the churches means that some people now try to claim that the two major confessions—Catholic and Lutheran—resisted Hitler and Nazism. But they only resisted Nazi interference in Church power, and then only fairly late. There was criticism of the euthanasia program, and some criticism of the extermination of converted Jews, but it was little and it was late. The Church Wars were about issues of Church autonomy, not genocide. Like the officer class, many Catholic and Lutheran church officials would regret having supported Hitler (many would claim that the problem wasn’t Hitler, but Nazi administrators acting on their own initiative), but support him they did. Had the Catholic party (the Centre Party) not unanimously voted for the Enabling Act, it would not have passed.

Catholics and Lutherans were concerned about reinstating the privileges reduced by the Social Democrats (who believed in a separation of church and state) and the political agenda they believed was the core of being “Christian”—opposition to birth control, homosexuality, abortion, pornography.

Germans were persuaded during the Nazi regime—people came to accept and act on policies they would have balked at before 1930—but not because they heard a Hitler speech and were magically hypnotized. They did so largely for instrumental reasons.

Culturally, our discussions of Hitler are dominated by what Ian Kershaw calls “the Hitler myth”—that he was a magically charismatic leader who overwhelmed Germans’ capacity to judge. That isn’t what happened: Germans judged, and they liked what they saw.

My point is that these four folk explanations—of Hitler, persuasion, ethical rhetoric, and good citizenship—are not just inaccurate, but inaccurate in ways that reinforce factionalism, obstructionism, and politics as performance of in-group loyalty. Talking more about Hitler is a way to talk about what’s wrong with those explanations.