Handout for Denver talk

“Democracy and the Rhetoric of Demagoguery”

Here’s my argument: I think we can distinguish demagoguery from other forms of persuasive discourse on the basis of the presence of certain rhetorical moves, not the identity of the rhetors. I think, also, we should talk about the effectiveness of demagoguery in terms of how it plays into the informational worlds that people inhabit. Demagoguery isn’t an identity; it’s a relationship.

There are six methodological problems to consider with the “infer from rhetors I hate” project:

1. Looking for the commonalities among successful and hated rhetors assumes what is at stake—that it was something about their rhetoric or identity that enabled them to succeed, rather than there being a tremendous amount of luck, or their being in the right place at the right time. If we want to know what does enable that success, we need to look at unsuccessful demagoguery.
2. That method doesn’t enable us to see demagoguery we like—by beginning with rhetors we hate, we exclude consideration of our attraction to potentially damaging rhetoric.
3. It also prohibits empirical research on demagoguery. And here I’m advocating a kind of research I don’t do, but that I think is valuable. If we could come up with a fairly rigorous definition of demagoguery, then we could use strategies like corpus analysis in order to be more precise in our claims of causality and consequences.
4. Oddly enough, the standard criteria—motive, emotionality, populism—don’t even capture the most famous demagogues, or they end up capturing all political figures, so those criteria are both over- and under-inclusive.
5. These criteria are demophobic and elitist, as though rich and intellectual people never fall for demagoguery, and that just isn’t true.
6. Finally, by focusing on identities as the problem—bad things happen because we have powerful individuals who are demagogues—we necessarily imply a policy solution of purification. If the presence of these bad people is the problem, then we should purify our community of them. Since I’ll argue that policies of purification are, in fact, one of the consistent characteristics of demagoguery, that would mean, in the scholarly project of criticizing demagogues, we’re engaged in demagoguery.

Odd characteristics of demagoguery:
1. It’s obvious to us that a rhetor is a demagogue, but not to that rhetor’s audience. If the identity of demagogue is so obvious, why does demagoguery ever work?
2. If demagogues are magicians with word wands, why is it so hard to describe their impact/effect accurately?

“Time after time, Hitler set the barbaric tone, whether in hate-filled public speeches giving him a green light to discriminatory action against Jews and other ‘enemies of the state’, or in closed addresses to Nazi functionaries or military leaders where he laid down, for example, the brutal guidelines for the occupation of Poland and for ‘Operation Barbarossa’. But there was never any shortage of willing helpers, far from being confined to party activists, ready to ‘work towards the Fuhrer’ to put the mandate into operation” (Ian Kershaw, Hitler, the Germans 43)

“Nazi propaganda was not, and could not, be crudely forced on the German people. On the contrary, it was meant to appeal to them, and to match up with everyday German understandings [….] Thus, far from forcing unwanted or repellant messages down the throats of the population, Hitler and the Nazis carefully tailored what they said, wrote, and especially what they did, in order to win and hold the support of the people.” (Robert Gellately, Backing Hitler 259)

Characteristics of public discourse in train wreck moments:

• Policy questions are reduced to questions of identity, with need reframed as threat to the ingroup, and with identity bifurcated into “us” and “them”;
• The community or nation-state is reduced to the ingroup who are seen as the “real” Americans/Christians/Republicans/Progressives (so that, even if “they” are legally or historically part of the community, they are never considered “real” members);
• An outgroup is scapegoated for all the ingroup’s problems;
• Public discourse is predominantly performance of ingroup loyalty;
• Ingroup loyalty is demonstrated by insisting that policy discussions are unnecessary because the correct course of action is obvious to all people of goodwill (disagreement is fake—either the person disagreeing doesn’t really disagree, or is fooled by the outgroup);
• The community is described as threatened by the mere presence, let alone political power, of that outgroup, and so the solution is some version of purifying us of them;
• Because we are threatened with extinction, concerns like due process, human rights, and fairness are luxuries we can’t afford;
• The discourse is heavily fallacious, but not necessarily emotional, and can involve appeals to authority and expertise, and can look as though there is a lot of “evidence”;
• Nuance, uncertainty, deliberation, and skepticism are rejected as unmanly and disloyal (except for skepticism about claims made against ingroup members);
• Finally, while there are overlaps with fascism (especially as Robert Paxton describes it), it isn’t necessarily fascist, or even political—it is an attack on Enlightenment notions of reason, universal rights, and inclusive deliberation.

Damaging assumptions that people commonly make about political decisions:

• When it comes down to it, the solutions to our political problems are straightforward. Our political issues are the consequence of not having enough good people in office—instead, we have professional politicians who aren’t really trying to solve things. (Stealth Democracy)
• Good people do good things, and it’s easy to recognize when someone is a good person, or when a plan of action is good. So, we don’t need to argue about policy—we just need to vote for the good people who are above (or outside of) professional politics.
• Good people speak the truth, and they don’t try to alter it through rhetoric—they are transparent. You’re better off with someone who doesn’t filter—even if what they say is offensive or not politically correct—because you can know that person. S/he won’t mislead you.
• A “rational” argument is a claim that is true (and that you can recognize easily to be true) supported by evidence, and presented in an unemotional way.

The definition I’m proposing:

Demagoguery is a discourse that promises stability, certainty, and escape from the responsibilities of rhetoric through framing public policy in terms of the degree to which and means by which (not whether) the outgroup should be punished for the current problems of the ingroup. Public debate largely concerns three stases: group identity (who is in the ingroup, what signifies outgroup membership, and how loyal rhetors are to the ingroup); need (usually framed in terms of how evil the outgroup is); and what level of punishment to enact against the outgroup (ranging from restriction of rights to extermination).

(Some) Citations:
Berlet, Chip, and Mathew N. Lyons. Right-Wing Populism in America: Too Close for Comfort. New York: Guilford, 2000.

Burke, Kenneth. “The Rhetoric of Hitler’s ‘Battle.’” The Philosophy of Literary Form: Studies in Symbolic Action. 3rd ed. Berkeley: U of California P, 1978.

Gellately, Robert. Backing Hitler: Consent and Coercion in Nazi Germany. Oxford: Oxford University Press, 2001.

Hibbing, John R. and Elizabeth Theiss-Morse. Stealth Democracy: Americans’ Beliefs about How Government Should Work. New York: Cambridge U P, 2002.

Kershaw, Ian. Hitler: 1889-1936: Hubris. New York: Norton, 1998.
—. Hitler, the Germans, and the Final Solution. New Haven: Yale University Press, 2008.

Lakoff, George. Moral Politics: How Conservatives and Liberals Think, 2nd ed. Chicago: U of Chicago P, 1996.

Mann, Michael. The Dark Side of Democracy: Explaining Ethnic Cleansing. Cambridge: Cambridge UP, 2005.

Miller, Thomas P. The Formation of College English: Rhetoric and Belles Lettres in the British Cultural Provinces. Pittsburgh: U of Pittsburgh P, 1997.

Taleb, Nassim Nicholas. Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. 2nd ed. New York: Random House, 2005.

Ward, Jason Morgan. Defending White Democracy: The Making of a Segregationist Movement and the Remaking of Racial Politics, 1936-1965. Chapel Hill: U of NC P, 2014.

Sciencing in public

As someone really worried about how badly Americans argue about public policies, I’ve been especially worried about highly politicized attacks on science, and about how hard it is for scientists to get pretty basic concepts understood. As a historian of public argumentation, I’m unhappily aware that the tendency to attack scientific discoveries on purely political grounds isn’t new. And a lot of people have written about how science is attacked, and bemoaned our inability to get scientific findings to have real impact on public policy, but I think those pieces haven’t had much impact because of their own rhetoric.

Lots of people have said that scientists’ rhetoric is flawed because it’s too technical and academic, but, honestly, I don’t think that’s the problem. I think the two major problems that vex public uses of science in public policy are these: first, culturally, we have a vague definition of what a “science” is, and, second, we have a thoroughly muddled notion of what “objectivity” is.

And scientists themselves don’t help. In public, too many scientists conflate “science” and “what I think is good science” and appeal to an inconsistent epistemology.

What people engaged in research about climate change, vaccines, evolution, and gender need to understand is that the people who attack what some of us think of as science do so by citing what they think of as science.

Behind the arguments that we think of as “science” arguments are, it seems to me, two deep misunderstandings: first, what a “science” is; second, what epistemology (model of knowledge) is right. The first one is relatively straightforward, but the second, more complicated one, is the really crucial one.

Part of the problem is that the cultural understanding of what it means to be a “science” is muddled, and, for a large number of people, simply outdated. Until well into the 20th century, various disciplines were called “sciences” that had nothing to do with what we now think of as the scientific method, insofar as they relied on non-falsifiable claims (eugenics, for instance). But they called themselves sciences and they were accepted as such because they had numbers, they had experts, and they had peer-reviewed journals. For many people, that older notion of a “science” prevails: a science is something that is done by people with degrees in fields that seem kind of science-y and have a lot of math. (Look at the oft-shared list of “scientists” who say global warming is a hoax.)

There are various organizations out there (and long have been) with very clear political agendas that call themselves “sciences” or “scientific” and manage to mimic the rhetorical moves of sciences. This, too, is nothing new. When mainstream organizations abandoned race as a useful concept, racists formed their own organizations and journals that published only “studies” that fit their political agenda (John P. Jackson’s Science for Segregation describes this process elegantly). Meanwhile, they railed at the mainstream journals for being politicized. They managed to look like “science” to many people because they had authors who had degrees in science, some of whom worked as “scientists.” That notion of science is an identity argument: science is the work done by people we think of as scientists.

The same thing happened when psychologists decided that homosexuality was not a mental illness—organizations formed with the political agenda of only supporting research that pathologized homosexuality (and, once again, that condemned other research as “politicized”). And they call themselves scientific organizations, with “research” prominent in their titles. There are similar organizations and webpages (and some journals) for organizations that promote Young Earth Creationism, anti-vaccine rhetoric, attacks on climate change, and all sorts of other ideologically charged issues. And, as with the pro-segregationist rhetoric, they are explicitly politicized while projecting that condemnation onto their critics. Because they are explicit that they are looking for “science” that supports beliefs they already have, one of the very straightforward ways that they are not sciences is that their claims are non-falsifiable.

They are scientific, they say, because they can generate studies and data that support their beliefs. In the case of creationism and homophobia, the groups often insist that they are proving that Scripture and “science” say the same thing. They can support their readings with data or quotes from people with degrees in science, and with scientific-sounding explanations. That’s cherry-picking, of course, but it means that they can invoke the authority of “science” to support their claims.

(And here I should probably come clean: I self-identify as Christian, and I think they cherry-pick Scripture just as much as they cherry-pick “science.”)

When I first wandered into these places, where people at odds with the scientific consensus insisted that they were doing science, I just assumed that they were being deliberately disingenuous, but I no longer think so. For me, as for many people, there is “normal science,” which is the data being produced by people publishing falsifiable studies in peer-reviewed journals. Science, furthermore, has the quality that scholars in rhetoric call “good faith argumentation,” meaning that the people putting forward a claim can imagine being presented with data that would cause them to abandon it (there are some other characteristics, but that one is the important one here). But that isn’t how everyone thinks about science: for many people, science isn’t about method, but about the identity of the person doing the work.

Young Earth Creationists, for instance, fail at every point mentioned above (except posture). They can cite data to support their claims (some of which, but not much, is true), but they can’t articulate the conditions under which they would abandon their narrative about the creation of the earth.

So, why do they continue to think of themselves as doing science?

It’s the identity argument. As I said earlier, for many people, “science” is the activity done by people who have degrees in a science field, regardless of the institution, and regardless of the discipline. So, how do they distinguish between good and bad science? Good science is true.

For them, science is a relationship to reality—if you’re a “scientist,” then you have a direct connection to the logos that God breathed into the fabric of the universe. Thus, a list of 700 scientists saying that global warming is false matters because it shows people with that kind of unmediated knowledge making a claim. That faith in unmediated knowledge is often called the “naïve realist” epistemology.

That “unmediated knowledge” is crucial to all this, and it’s where scientists trip themselves up. It’s important to understand that the people arguing for young earth creation believe that they can simply look and see the truth–so any argument that says “You’re wrong, because you can simply look and see a different answer” isn’t going to work rhetorically. They are looking, and they can find evidence to support their position.

And that raises the second, fairly complicated, problem about epistemology. And scientists have issues with this, I think, because when in public they’re naive realists, and they insist you’re either a naive realist or a postmodern relativist (really? do they think creationists are postmodernists? they’re pre-modernists), but when at home they’re skeptics. Science itself rejects naive realism, so scientists need to stop talking as though there is naive realism or post-modernism. (In fact, that’s how creationists talk, which is a different post.)

A non-trivial complication in how the public argues about “science” is that what I earlier called “normal science” is often advocated by people who simultaneously do and don’t claim to have unmediated knowledge of the world. That’s a rhetorical problem. Scientists and young earth creationists (and all the other advocates of bad science out there) both appeal to and reject naïve realism.

Briefly, many defenders of science in public debates make two claims simultaneously: science is indisputably true; science is better than religion because scientists change their mind when presented with new evidence—science is falsifiable. In other words, science looks true to people AND the results of scientific studies are contingent claims that could be proven false. So, as I said, in public discourse, too many scientists appeal to naive realism, but the scientific method itself rejects naive realism.

To many people, that looks as though scientists are saying that, although we’ve changed our mind a lot in the past (meaning “science” can be wrong) we are absolutely right now. Or, more bluntly: science is true but it’s been false.

And, let’s be blunt: it’s been false. Eugenics was mainstream science. It had bad methods, but it was mainstream science, and it was taught in science classes. It didn’t look bad at the time. Medicine claims to be a science, as does nutrition, and both have made a lot of claims that scientists in those fields now believe to be false.

Scientists need to reject the false binary of “you either believe that science tells us things that are obviously true” or “you are a postmodernist literary critic who believes that all claims are equally true.” That is not only a falsifiable claim, but a false one. Young earth creationists are cheerfully unaffected by anything postmodernist, and they say that they believe things that are obviously true. Also, there are very few “postmodernists” who say that “all claims are equally true” (Feyerabend comes to mind, and very few others), and no, that isn’t actually what Foucault or Derrida said. (And I don’t even really like Foucault or Derrida, and I think that’s just an outrageously ignorant way to characterize what they’re saying.)

Keep in mind, Popper said that objectivity isn’t about what an individual does. A claim is objective, he said, because it’s an object in the world, and he said an objective claim isn’t necessarily true. So, since Popper said that an individual scientist isn’t necessarily objective, is he a postmodern relativist?

Good science isn’t about the cognitive processes of individuals engaged in science; it’s about the arguments people in science have. When people claim that you either believe what “science” says right now or you’re a postmodernist relativist hippy, they’re rejecting the scientific method.

The whole premise of the scientific method, especially concepts like a control group, falsifiability, and double-blind studies, is that people are prone to confirmation bias (a good study doesn’t set out to confirm a hypothesis: it sets out to falsify one). The scientific method presumes that humans’ perception is clouded. Acknowledging that individuals can’t see the truth doesn’t make the underlying epistemology either solipsistic or relativist (both of which are, oddly enough, often misnamed as postmodernism—they long predate modernism, let alone postmodernism). It means that science generally exists in the realm of skepticism, sometimes radical, sometimes the mild version that Karl Popper called fallibilism. For Popper, there is a truth out there, and it can be perceived by individuals, but individuals are fallible judges of when we have and have not reached the truth.

Science isn’t about binaries. It’s about continua. There are some claims that could, in principle, have been falsified, but have withstood such tests so thoroughly that it isn’t even interesting to consider the possibility—such as evolution. There are aspects of evolution about which there is disagreement, and about which new consensuses continue to form (such as the direct ancestor of homo sapiens), but all of those disagreements are subject to proof and disproof through further research. And that is the difference between evolution and creationism: religious faith, by its very nature, cannot be subject to disproof. Science is, fundamentally, a rejection of naive realism and of binaries about certainty: it says we should be skeptical about all claims, and we should think about claims in terms of how certain we are of them.

It’s no coincidence that science and skepticism arose at the same time, and, in fact, that’s the argument that scientists make about how science is different from religion: a true scientist will abandon her beliefs if the data disconfirm them, but religion is about rejecting the data if it disconfirms the beliefs.

Let me rephrase my original statement of the problem: scientists make a rhetorical claim (their claims should be granted more credence because of how they are supported), and an epistemological one (their arguments are true). I sincerely believe that science is in such a bad way right now because too many advocates of science reject what they know: that science isn’t about being certain or not, but about how certain you are, and what are the conditions under which you should change your mind.

The epistemology underlying science is a skeptical one, and scientists know that. When they’re arguing in public, they need to stop acting as though there is either naive realism or postmodern relativism. Scientists are skeptics who argue passionately for their point of view.

Right now, our political world is demagogic, and that means that our political world is dominated by the notion that there are good people who perceive the obviously correct way to do things and those assholes. We disagree about who are the assholes, but we all agree that it’s a binary.

What science could and should do for us is show a different way of thinking about thinking–that the right course of action depends on a correct understanding of the world as it is, and there is no correct understanding immediately available to us, but there are understandings that look pretty damn good, given all the research that’s been done.

 

I’m not saying that scientists need to argue better in public; while I think the whole project of sciencing in public is wonderful, I also think, ultimately, scientists aren’t obligated to be rhetoricians. (Some of them are wonderful rhetoricians, such as Steven Weinberg, but that shouldn’t be a requirement.) Instead, I think we need, as a culture, a better understanding of how knowledge isn’t a binary between certain and uncertain, but a continuum. I think, oddly enough, that the solution to our current problem of fake science isn’t really in science, but in the study of knowledge.

Among Democrats (Compromise, Purity, and Lefty Politics)

Among Democrats, there are a lot of narratives about the 2016 election, and two of them are highly factional (that is, they assume an us or them, with us being the faction of truth and beauty and them being the people who are leading us astray). One is that Clinton’s election was tanked by Bernie-bros who were all young white males too obsessed with purity to take the mature view and vote for Clinton. The other is that the DNC, an aged and moribund institution, foisted Clinton onto Dems when she was obviously the wrong candidate.

Both of those narratives are implicit calls for purity, for a Democratic Party (or left) that is unified on one policy agenda—maybe the policy agenda is a centrist one, and maybe it’s one much further left—but the agreement is that we need to become more purely something. Both narratives are empirically false (or else non-falsifiable), patronizing, and just plain offensive. In other words, both of those narratives are driven by the desire to prove that “us” is the group of truth and goodness and “them” is the group of muddled, fuddled, and probably corrupt idjits.

And, as long as the discourse on the left is which “us” is the right us, progressive politics will lose.

There isn’t actually a divide in the left—there’s a continuum. People who can be persuaded to vote Dem range from authoritarians drawn to charismatic leadership (anyone who persuades them that s/he is decisive enough to enact the obviously correct simple policies the US needs) all the way through various kinds of neoliberalism to some versions of democratic socialism. And there are all those people who can vote Dem on the basis of a single issue—abortion or gun control, for instance. When Dems insist that only one point (or small range) on that continuum is the right one, Dems lose because none of those points on the continuum has enough voters to win an election. That’s why purity wars among the Dems are devastating.

While voting Dem is actually a continuum, there are many who insist it is a binary—those whose political agenda the DNC should represent (theirs) and those whose agenda is actually destructive, whose motives are bad, and who cause Dems to lose elections (everyone else—who are compressed into one group).

Here’s what’s interesting to me. It seems to me that everyone who wants Dem candidates to win recognizes that a purity war on the left is bad, and everyone condemns it. Unhappily, being opposed to a purity war in principle and engaging in one in effect are not mutually exclusive. There is a really nasty move that a lot of people make in a rhetoric of compromise—we should compromise by your taking my position—and that is what a lot of the “let’s not have a purity war” discourse on the left seems to me to be doing. Let’s not do that. Let’s do something else.

This is about the something else that we might do.

And it’s complicated, and I might be wrong, but I think that Dems will always lose in an “us vs. them” culture because, at its heart, the Dem political agenda is about diversity and fairness, and people drawn to Dem politics tend to value fairness across groups more than loyalty to the ingroup, so any demagogic construction of ingroups and outgroups is going to alienate a lot of potential Dem voters. Sometimes voting Dem is a short-term looking out for your own group, but an awful lot of Dem voters are motivated by the hope of creating a world that includes them. I don’t think Dems will succeed if we grant the premise of demagogic politics: that only the ingroup is entitled to good things.

But we’re in a culture of demagoguery, in which politics is framed as a battle between Good and Evil, and deliberation (in which people of different points of view come together to work toward a better solution) is dismissed. If we’re in a world of us vs. them, how can Dems create a politics of us and them? That is our challenge.

And I want to make a suggestion about how to meet that challenge that is grounded in my understanding of what has happened in the past: not just in 2016 (although that is part of it), but also in ancient Athens, to opponents of Andrew Jackson, to opponents of Reagan, and in earlier eras of highly factionalized media. I want to argue that what seem to be obviously right answers are not obvious, and possibly not even right.

 

  1. In which I watch lefties tear each other to shreds and lose an election we should have won

When I first began to pay attention to politics, and saw how murky, slow, and corrupt it all was, it seemed to me that the problem was clear: people started out with good principles, and then compromised them for short-term gains, and so, Q effing D, we should never compromise. (I saw The Candidate as a young and impressionable person.)

I could look at political issues, and see the obvious course of action. And I could see that political figures weren’t taking it. Obviously, there was something wrong with them. Perhaps they were once idealistic, perhaps they had good ideas, but they were compromising, and, obviously, they shouldn’t; they should do the right thing, not the sort of right thing.

Another obvious point was how significant political change happens: someone sets out a plan that will solve our problems, and refuses to be moved. ML King, Rosa Parks, FDR, Woodrow Wilson, John Muir, Andrew Jackson (no kidding—more about his being presented as a lefty hero below) were all people who achieved what they did because they stood by their principles.

That history was completely, totally, and thoroughly wrong, in that neither Wilson nor Jackson was the progressive hero I thought, and all of those figures compromised a lot. But, if that’s the history you’re given, then you will believe that to compromise necessarily means moving from that obviously right plan (about which you shouldn’t have compromised) to one that is much less right, and that the only reason to do so would be pragmatic (aka, Machiavellian) purposes. Therefore, substantial social change and compromise are at odds, and if you want substantial social change, you have to refuse to compromise. (Again, tah fucking dah—there’s a lot of that in easy politics.)

My basic premise was that the correct course of action was obvious, and, therefore, I had to explain why political figures didn’t adopt it. Why would people compromise a policy that is obviously right? And, obviously, they had to deviate from the right course of action in order to get political buy-in from people who value things I don’t value. Or they were bad politicians in the pocket of corporate interests. (Notice how often things seemed obvious to me.)

And then Reagan got elected. Reagan lied like a rug, and yet one of the first things his fans said about him was that he was authentic. He announced his run for the Presidency by saying he would support states’ rights at the site of one of the most notorious civil rights murders. And yet his fans would get enraged if you suggested he appealed to racism.

People loved him, regardless of his policies, his actual history, his lies. They loved his image. (It’s still the case that people admire him for things he never did.)

When he was elected, lefties went to the streets. We protested. The people protesting were ideologically diverse—New Deal Dems, people who had said that there was no difference between him and Carter, radical lefties, moderate lefties; I even saw people who had told me they intended to vote for Reagan because it would make the people’s revolution more likely, and they were now protesting that the candidate they had supported had won.

There were more than enough people out protesting Reagan’s election to prevent his getting reelected. And, in 1980, we all agreed that he shouldn’t be reelected. Unhappily, we also all agreed that he had been elected because there was too much compromising in the Dem party, that Carter was a warmongering tool of the elite, and that the mistake we made was not having a candidate who was pure enough. And, so, we agreed, the solution was for the Dems to put forward a Presidential candidate who was more pure to the obviously right values and less willing to compromise on them. We didn’t get that candidate; in fact, we didn’t get a very good candidate (he was pretty boring), but his policies would have been good. And a lot of lefties refused to vote for him.

Unhappily, it turns out we disagreed as to what those obviously right values were.

In 1980, the Democratic Party was the party of unions, immigrants, non-whites, people who believe in a strong safety net, isolationists, humanitarian interventionists, pro-democracy interventionists, people who believed a strong safety net was only possible in a strong economy (what would later be called third-way neoliberals), environmentalists, people who were critical of environmentalists, and all sorts of other ideologically diverse people.

There wasn’t a party platform on which we could all agree. To support the unions more purely would have, union reps argued, meant virulently opposing looser standards about citizenship and immigration. The anti-racist folks argued for being more inclusive about citizenship and immigration. Environmentalists wanted regulations that could cause manufacturing to move to countries with lower standards, something that would hurt unions. People who wanted no war couldn’t find common ground with people who wanted humanitarian intervention. (And so it’s interesting how conservative the 1980 platform now looks.)

Dems, at that point, had five choices: reject the notion that there was a single political agenda that would unify all of its groups (that is, move to a notion of ideological and policy diversity in a party); decide that one group was the single right choice; try to find someone who pleased everyone; try to find candidates who wouldn’t offend anyone; or engage in unification through division (get people to unify around how much they hated some other group).

Mondale was the fourth; most lefties went for the second or fifth. I think we should consider the first.

At the time I was a firm believer in the second, for both good and bad reasons. And lots of other people were too. What we believed is what I have come to think of as the P Funk fallacy: if you free your mind, your ass will follow. I believed that there were principles on which all right-thinking people agree, and that those principles necessarily involve a single policy agenda. Thus, we should first agree on principles, and then our asses will follow.

Lefty politics is the grandchild of the Enlightenment. We believe in universal rights, the possibilities of argument, diversity as a positive good, the hope of a world without revenge as the basis of justice. And, perhaps, we have in our ideological DNA a gene that is not helping us—the Enlightenment is also a set of authors who shared the belief (hope?) that, as Isaiah Berlin said, all difficult questions have a single true answer. I think the hope is that, if we get our theories right—if we really understand the situation—then the correct policy will emerge.

But, there might not be a correct policy, at least not in the sense of a course of action that serves everyone equally well. An economic policy that helps lenders will hurt borrowers, and vice versa.[1] In trying to figure out, then, what kind of economic policy we will have, we can decide we’re the party of lenders, or we’re the party of borrowers, and only support policies that help one or the other. Or, we could be the centrist party, and try to have policies that kinda sorta help everyone a little but not a lot and therefore kinda sorta hurt everyone a little but not a lot. And thereby we’re promoting policies that everyone dislikes—I think Dems have been trying that for a while, and it isn’t working. But neither is deciding that we’ll only be the party of borrowers, since borrowers require lenders who are succeeding enough to lend.

The problem with the whole model of politics being a contest between us and them is that it makes all policy discussions questions of bargaining and compromise. What’s left out is deliberation. But that’s hard to imagine in our current world of, not just identity politics, but of a submission/domination contest between two identities. And, really, that has to stop.

Blaming the left for identity politics is just another example of the right’s tendency toward projection. The Federalist Papers imagines a world in which elections are identity-based (which the Constitution’s defenders saw as preferable to faction-based voting). Since most voters could not possibly personally know any candidate for President or Senate, they should instead vote for someone they could know, and whose judgment they trusted (see, for instance, what #64 says about the electors and the Senate). That person could then know the various candidates and make an informed decision as to which of them had better judgment. So, at each step, people are voting for a person with good judgment, to whom they were delegating their own deliberative powers.

That vision quickly evaporated and was replaced by exactly what the authors of the Constitution had tried to prevent: party politics. And then, by the time of Andrew Jackson, we got a new kind of identity politics: voting for a candidate because he seems to share your identity, and, will therefore look out for people like you. His good judgment comes not from expertise, the ability to deliberate thoughtfully, or deep knowledge of history, but from his being an anti-intellectual, successful, and decisive person who cares about people like you. Through the nineteenth century, the notion of an ideal political figure shifted from someone much smarter than you are to someone not threatening to you.

 

II. Factionalism, Andrew Jackson, and the rise of identification

The problem that everyone to the left of the hard right has is the same: we are in a culture in which rabid factionalism on the part of various right-wing major media is normalized, and anything not rabidly right-wing is condemned as communist. Lefties should be deeply concerned about factionalism (including our own), and careful about how we try to act in such a world. There are several clear historical lessons for Americans as to what that kind of rabid factionalism does (I’ll just talk about Athens), and a clear lesson from American history as to how we should not try to manage it (the case of Andrew Jackson).

Here’s the short version. The US, when it was founded, was an extraordinary achievement on the part of people well-versed in the histories of democracies, republics, and demagoguery. Their major concern was to make sure that the US would not be like the various republics and democracies with which they were familiar. That included the UK (which was, at that point, immersed in a binary factionalism), various Italian Republics (especially Florence and Venice), the Roman Republic, and Athens.

And Athens is an interesting case, and something about which current Americans should know more. Knowing their Thucydides (via Thomas Hobbes, a post I might write someday), the authors and defenders of the constitution knew that Athens had shot itself in the face because at a certain point (just after the Mytilenean Debate, for those of you who care), everyone in Athens thought about politics in two ways: 1) what is in it (in the short-term) for me; 2) what will enable my political party to succeed?

No one worried about “what is best for Athens” with a vision of “Athens” that included members of the other political party. So, because Athens was in a situation of rabid factionalism, you would cheerfully commit troops to a military action if you thought it would do down the other party. Military decisions were made almost entirely on factional bases.

Thucydides describes the situation. He says that city-state after city-state broke into hyper-factional politics that was almost civil war. All anyone cared about was whether their party succeeded—no one listened to the proposals of the other side with an ear to whether they were suggesting something that might actually help. In fact, being willing to listen to the other side, being able to deliberate with them, looking at an issue from various sides—all of those things were condemned as unmanly dithering. Refusing to call for the most extreme policies or suggesting moderation wasn’t a legitimate position—anyone doing that was just trying to hide that he was a coward. Only people who advocated the most extreme policies were trustworthy; anyone else wasn’t really loyal to the party and so shouldn’t be trusted. Plotting on behalf of the party was admirable, and it didn’t matter how many morals were shattered in those plots—success of the party justified any means. But people weren’t open about being willing to violate every ethical value they claimed to have in order to have their party triumph; people cloaked their rabid factionalism in ethical and religious language while actually honoring neither. So, Thucydides says, there was a situation in which every good value was associated with your party triumphing, and every bad value with its not triumphing.

People worried about their party, and not their country.

We can think, why would anyone do that? And yet, we might do it. No one thought to themselves, “I wish to hurt Athens and so I will only look out for my political party.” Instead, the assumption they probably never consciously articulated, but that was the basis for every decision, was that only their group was really Athenian. So, they thought (and sincerely believed): anything that promotes the interests of my group is good for Athens because only my group is really Athenian.

Michael Mann, a scholar of genocides, calls this the confusion of ethos and ethnos. The “ethos” of a country is the general culture, and the “ethnos” is one particular ethnic group. What can happen is that a specific group decides that it is the real ethos, and therefore any action against other groups is protecting “the people.” They are the only “people” who count. Seeing only your class, political party, ethnic group, or religion as the real identity of the group hammers any possibility of inclusive deliberation. It is also the first step toward the restriction, disempowerment, expulsion, and sometimes extermination of the non-you. While not every instance of “only us counts” ends in mass killing, every kind of mass killing—genocide, politicide, classicide, religiocide—begins with that move.

Even ignoring the ethics of that way of thinking, it’s a bad way for a community to deliberate. But what they did think, as Thucydides says, is that anything that helped you and your party was a good thing to do, even if it was something you would condemn in the other party. You might cheerfully use appeals to religion to try to justify your policies, but if other policies better helped your party, then you’d use religion to justify those policies. No principle other than party mattered.

If the other side proposed a policy, you didn’t assess whether it was a good policy; you were against it. You were especially likely to be against it if it was a good policy, since then they would gain more supporters. You would gleefully gin up a reason that troops should be sent to a losing battle and put an opposition political figure in charge—losing troops (and a battle) was great if it hurt the other party.

And so Athens crashed. Hardly a surprise.

In fact, the people of Athens were dependent on each other, and no group could thrive if other groups lost battles. Us and Them thinking forgets that we are us.

At the time of the American Revolution, the British political situation was completely factionalized. We might like to admire Edmund Burke, who so eloquently defended the American colonies, but even I (an admirer of his) know that, had his party been in good with George III (they weren’t) he probably would have written just as eloquent an argument for crushing the American Revolution. The authors of the Constitution were also well aware of other historical examples that showed the fragility of republics, especially Venice (one of the longest lasting republics), Florence, and Rome.

And those were the conditions the authors of the Constitution tried to solve through the procedure of people voting for someone whose authority came from intelligence and judgment. That is, the constitution worked by having people vote, not for the President directly (since you couldn’t possibly know the President personally) but for someone you could know—a state legislator, an elector—whose judgment you could assess directly. But factions arose anyway.

The factions were somewhat different from those in either Athens or Britain. In Athens it was (more or less) the rich who wanted an oligarchy, or really a plutocracy, with the wealthy having more power than the poor, and with very little redistribution of wealth. On the other side were the non-leisured (not necessarily poor, but not very wealthy either) who wanted at least some redistribution of wealth and a lot of power-sharing. But an individual’s decision to join a particular faction was also influenced by family alliances and personal ambition. In Britain, factions were described as country versus city (wealth that came from land ownership versus industry and finance) which may or may not be accurate. As in Athens, there were other factors than just economics, and that city-country distinction might itself have been nothing more than good rhetoric to explain factions that weren’t really all that different from each other.

In the US, by the time of Andrew Jackson’s rise (the 1820s), there was some division along economic lines (agriculture vs. shipping, for instance), and some along ideological ones (Federalist vs. Antifederalist), but they didn’t give a very clean binary. There were more than two parties, and even the major parties were coalitions of people with nearly incompatible political agendas (Whigs and Democrats were both strong in the North and South, for instance). Given both the youth of the country and the large number of immigrants, there weren’t necessarily family traditions of having been in one faction or another, and there wasn’t some kind of regional distinction (the North was still predominantly agricultural, and some “Northern” states had slaves until the 1830s, so neither the agricultural/industrial nor slave/not-slave distinctions provided any kind of mobilizing policy identity). There wasn’t the odd role that the monarchy played in British political factions (for years, one faction attached to the monarch, and another to the son whom the monarch hated). US factions were muddled and shapeshifting.

A disparate coalition is particularly given to intrafactional fighting, splitting, and purity wars, and so there is generally a strong desire to find what is usually called a “unification device.” The classic strategy to unify a profoundly disparate coalition is two-part: unification through finding a common enemy, and cracking the other side’s coalition with a wedge issue. If a party is especially lucky, both parts of that strategy are made available through one issue. And that’s what US parties did in the antebellum era: after trying various issues, they settled on fear-mongering about abolitionism, with some anti-Catholicism thrown into the mix.

Antebellum media was extremely factionalized. Newspapers were simultaneously openly allied with a particular party, rabidly factional, and passionate in their condemnations of faction.

“The bitterness, the virulence, the vulgarity, and perfidy of factious warfare pervade every corner of our country;–the sanctity of the domestic hearth is still invaded;–the modesty of womanhood is still assailed…” (“Party” U.S. Telegraph, June 24, reprinted from the Sunday Morning News). The anti-Jackson Raleigh Register had the motto “Ours are the plans of fair delightful peace, unwarp’d by party rage, to live like brothers” but spent the spring and early summer of 1835 in vitriolic exchanges with the Jacksonian Standard. One letter in the exchange, for instance, begins, “The writhing, twisting and screwing–the protestation, subterfuge and unfairness and the lamentation, complaint and outcry displayed in this famous production” (Raleigh Register February 10, 1835). (From Fanatical Schemes).

For instance, a newspaper’s criticism of a political party inspired a member of that party to threaten a duel, and, once the various rituals had been enacted that enabled a duel to be avoided, the person who had threatened a duel over his political faction having been criticized said, “I regard the introduction of party politics as little less than absolute treason to the South.”

When, from about 2003 to 2009, I was working on a book about proslavery rhetoric, this characteristic—that people operating on purely factional motives condemned factionalism—was one of the things that made me begin to worry about current US political discourse, since it was so true of what I was seeing in American media. The most passionately factional media have mottos like “Fair and Balanced.” I have an acquaintance who consumes nothing but the hyper-factionalized media, and he has several times told me I shouldn’t believe something from outside that media because it’s “biased.” Clearly, he doesn’t object to biased media, since that’s all he consumes. And then I noticed that’s a talking point in various ideological enclaves—you refuse to look at anything that disagrees with the information you’ve gotten from your entirely biased sources on the grounds that they are biased.

If you push them on that issue, I’ve found that consumers of that extremely factional media respond to criticisms of their factionalism (and bias) with “But the other faction does it too”—a response that only makes sense in a world in which every question is “which faction is better,” not “what behavior is right.” So, even their defense of their factionalism shows that, at base, they think political discourse is a contest between factions, and not a place in which we should—regardless of faction—try to consider various policy options. They live and breathe within faction.

Andrew Jackson was tremendously successful in that world, partially because of his conscience-free use of the “spoils system”—in which all governmental and civil service positions were given to supporters. And Jackson didn’t particularly worry about his policies; one of his major “policy” goals was abolishing the National Bank. Scholars still argue about whether he had a coherent political or economic policy in regard to the bank; what is clear is that he didn’t articulate one, nor did his supporters. Hostility to the bank was what might be called a “mobilizing passion,” not a rationally-defended set of claims. But that passion was shared with many who had almost gut-level suspicions of big banks, monetary controls, and a strong Federal Government.

It was such a widely-shared view that Jackson’s destruction of the Bank, and its direct consequence, the Panic of 1837, couldn’t serve as a rallying point for his opposition. And Jackson’s combination of popularity, use of the spoils system (including his appointment of judges—one of whom is an ancestor of mine), and strong political party worried many reasonable people that he was trying to create a one-party state. So, even as his second term was ending, people were trying to figure out how to reduce his power, and yet they couldn’t use what were quite clearly unsound economic policies.

There were more opponents of Jackson than there were supporters, but to call them disparate is an understatement. Some were pro-Bank, but too many were anti-Bank for that issue to be useful. There were a large number of anti-Catholics (some of whom might have been Masons), and also a few anti-Masons. Jackson’s bellicose (albeit effective) handling of the Nullification Crisis had alienated many of the South Carolina politicians whom he had trounced, but their stance on the tariffs (which had catalyzed the Nullification Crisis—they were trying to nullify tariffs) was incompatible with manufacturers in other areas.

Jacksonian Democrats played two (related) cards quite effectively—they played to racism about African Americans by supporting disenfranchisement of African-American voters and engaging in fear-mongering about free African Americans at the same time that they openly embraced Irish-Catholic voters (whose right to vote was still an issue in some places). They thereby drove a wedge between two groups that might have allied (poor Irish and freed African Americans), essentially offering the gift of “whiteness” to the Irish for their political support (this story is elegantly and persuasively told in How the Irish Became White). Because politics naturally works by opposites, this made Catholicism an issue on which other parties had to take a stand, and they stood to lose large numbers of voters no matter which way they jumped. The only thing that the various anti-Jackson parties shared was that they were anti-Jackson, and it’s hard to raise a lot of ire against a white guy who does a good job of coming across as a regular guy who really cares about “normal” people. In rhetoric, that’s called “identification”—a rhetor persuades an audience that s/he and they share an identity, and persuades them that the shared identity is all the information the audience needs.[2]

Elsewhere I’ve argued that John Calhoun tried to use fear-mongering about abolitionists (who were a harmless fringe group at that point) in order to unify proslavery forces behind him. It’s a great kind of strategy—you find some kind of hobgoblin that is politically powerless but that frightens a politically powerful group, and you present yourself as the only one who can save them from that hobgoblin. Unfortunately for everyone, Calhoun’s opponents simply picked up his method and American politics began an alarmism race to see who could out-fearmonger the others and call for increasingly extreme (and irrational) gestures of loyalty to slavery. Eventually, those gestures (such as the Fugitive Slave Law, the “gag rule,” the attempt to expand slavery past the Mason-Dixon Line, and, finally, the Dred Scott decision) generated as much fear and anger about The Slave Power as proslavery rhetors were generating about abolitionists.

Reagan was much like Jackson, in that his economic policies were vague but seemed populist, and he persuaded people that he really cared about them and understood them. He was normal, and he wanted normal Americans to be at the center of America.

Trump’s situation is different in that he has never had very high approval outside of his faction, but the rabidly factionalized media ensures that he has a deliberately and wickedly misinformed faction who are willing to pivot quickly to a new posture on a political issue.

What makes the two people similar, and like Jackson, is just that they have far more opponents than they have allies, and a highly mobilized base. As long as the opposition remains internally factionalized, they win. But, at this point, all that is shared among Trump’s opponents is opposition to Trump. The impulse might be to try to do what Jackson’s opponents did, and find some issue about which to fear-monger, or to do what Reagan’s opponents did, and remain factionalized. Right now, we seem headed toward the second, and in a somewhat complicated (and genuinely well-intentioned) way.

The advice seems to be that we need to have a unified and coherent policy agenda in order to mobilize voters. And, while I agree that simply being anti-Trump isn’t enough, I don’t think the unified and coherent policy agenda strategy will work either, for several reasons. The first reason is that it is trying to solve the problem of faction through faction. The second (discussed much later) is that it is grounded in a misunderstanding of how Americans vote.

 

III. Trying to solve the problems of factionalized politics by creating a more unified faction

In a healthy deliberative situation, people will consider the policy first and faction second. In a culture of demagoguery, people frame every issue as “us vs. them.” We’re in such a culture now, and the US was in such a culture in the antebellum era. And I think that culture meant that the people who wanted to deliberate—who wanted to consider various policy options, listen to various sides, think about the long-term consequences for all of us, and who had a broader vision of “us” (one that included everyone affected by policy decisions)—were demonized. And they are now.

And, unhappily, there are within the Democratic Party the two factionalized narratives about 2016 mentioned at the beginning. My basic argument about them is that they’re both wrong, as are a lot of narratives about 2016, insofar as they say that progressives’ winning more elections just requires… anything, or that it’s obvious that progressives need to do… anything. What makes those narratives wrong is that they are monocausal (one thing caused our problems and/or one thing will solve them), and they rely on naive realism (the notion that the truth is obvious).

Factionalized narratives say “there are two choices, and every right-thinking person chooses this one.” Deliberative narratives say, “there are many choices, and each has to be assessed in the circumstance, and each one has to be considered in terms of the past and future.” Factionalized narratives say the right answer is obvious; deliberative narratives say it isn’t. People committed to factionalized narratives say “everyone does it.” I don’t think that’s true.

And I think the comparison to the very similar antebellum situation explains why I don’t think everyone does it. I’m not convinced that this simultaneous entirely factionalized reasoning and condemnation of faction was “true of both sides.” I didn’t read a lot of Northern newspapers from the 1830s, so I can’t say whether they were just as much engaged in doublethink regarding factionalism (it’s great and every member of the faction should do it and every member of the faction should condemn factionalism), but my reading of the Congressional Record suggests they didn’t. The book I never wrote was about how proslavery rhetors tended toward deductive reasoning (the facts on the ground must be these because that’s what my principles say they should be) on every political issue before them. The rhetors who were antislavery (or just nonproslavery) tended to reason inductively, and say that a principle must be wrong because the facts on the ground suggest so. I think that’s a research project that could be useful for thinking about our current political situation—to what extent are people holding their premises safe from disproof?

For instance, William Lloyd Garrison had a journal, The Liberator, and he also had a very specific stance on abolition. Within the community of people who believed that slavery should be abolished immediately, there were profound and passionate disagreements about whether: slaves’ engaging in self-defense violence was justified, the Constitution was neutral on slavery or actively proslavery, abolitionists should insist on immediate and full citizenship for all slaves, abolishing slavery necessarily meant full citizenship for women. Garrison had his views on those issues, which he held passionately and argued for vehemently. He was no saint (Frederick Douglass noted that Garrison was not free of racist notions), and he may not even have been right in his arguments, but his paper published full and fair arguments against his positions. He believed in his arguments so thoroughly that he was willing to read and publish arguments he thought wrong.

How much current media could withstand that test? How many citizens could be like Garrison, and read and publish arguments with which we disagree? And this isn’t even setting a high bar, since Garrison was far from perfect—in fact, he was deeply flawed. It wouldn’t be that hard to be Garrison, and yet most of us fail to meet that low bar.

Antebellum proslavery media never published anything critical of slavery, and the factionalized southern media never published anything critical of their faction. What they did is what’s called “inoculation.” The goal of this media was to become the only source of information for its faction members, and they did that through reprinting articles about the evil behavior of outgroups (even about completely fabricated non-events). The main thrust was 1) deliberation is unnecessary because all you need to know is that we’re good and they’re bad; 2) DON’T LISTEN TO THEM—here’s what they’re going to say, and it’s obviously stupid and evil; 3) there is a war on us, and anyone who doesn’t recognize that is either knowingly or unknowingly on the side of our enemies.

So, in a democracy, a lot of public discourse was about how political deliberation was not only unnecessary, but actively bad (and unmanly). And they condemned the other side by presenting bastardized versions of “the other side’s” argument, as though they knew that their position of “it’s absolutely clear” would be weakened by showing the other side in a reasonably accurate way. And this fascinates me about authoritarian discourse: there is an odd admission that authoritarian discourse relies on single-party rhetoric, that it can’t withstand argumentation. So, perhaps, what it’s claiming isn’t so obvious?

The goal of much political discourse in the antebellum era, as it was in Thucydides’ era, and as it is now, was the establishment of a single-party state. Thus, much democratic discourse was oriented toward the destruction of democracy in the name of allowing only one faction to participate in the setting of policy. Unhappily, that is the argument happening on the left. The argument—whether centrists or progressives should set the policy agenda—is profoundly and irrationally anti-democratic because it makes the assumption that the Democratic Party must be a single-faction party. Why make that assumption?

Arguments for policy only seem sensible when the policy seems to arise naturally from a narrative about our current situation. The two dominant purity policy solutions arise naturally from two different narratives about why we are in our current situation. So, in order to argue for a non-purity policy, I have to show what’s wrong with both purity narratives about 2016.

And, really, there are a lot of plausible explanations about the 2016 election. There are, loosely, two purity narratives: first, that Clinton lost because too many of Sanders’ supporters were fanatics who refused to be pragmatic and vote for a less than pure candidate (let’s call that fanatical group Sandersistas, and let’s call the people who promote this narrative the Clintonistas);[3] second, that Trump is President because the DNC foisted a weak milquetoast candidate on the Dems instead of an energizing progressive with a clearly populist policy agenda. But it’s worth looking at all the other narratives as well (I’ll list eight here and mention a few others along the way).

But before even going into them, it’s important to remember that Clinton won the popular vote by a large amount (that’s important for every explanation). And she was predicted as having a 95% chance of winning; the most dire polls put her chances at around 70%.

One factor to keep in mind is that a lot of Obama voters went for Trump, and the first explanation is that a lot of them were motivated by sheer sexism. Second, the Right Wing Propaganda Machine had been attacking Clinton for 25 years, and if you throw enough mud, some of it sticks. Third, voter turnout. Fourth, her campaign blew it: because Clinton was arrogant, they focused on meetings with big-money donors toward the end rather than hand-clasping in battleground states. Fifth, voter suppression. The sixth explanation is millennial sexism. Seventh, there is the argument that Sanders poisoned the millennial vote. Eighth, the DNC was wrong to go for a third-way neoliberal instead of Sanders, who would have won (a surprisingly complicated narrative, explained below).[4]

1 and 2. The first and second can be combined in that they represent simply the problems that come with a candidate who has spent a lot of time committing the crime of being a woman in public. And there is an argument that her faults in those regards are reasons she shouldn’t have gotten the Dem nomination. I sometimes hear those arguments made by people who like Clinton and her policies, and I understand the impulse behind them. I certainly met even young people who had what even they admitted was an irrational aversion to her—the research is pretty clear that it’s harder to remember that every attack on a person has been debunked than it is to have a vague cumulative semi-memory that the person is guilty. For some people, that Clinton had these liabilities was a reason that she shouldn’t get the nomination, and I think there are two versions of that argument—one seems to me reasonable (even if, ultimately, I disagreed with it) and the other is disturbingly anti-democratic.

The first is that, even if it’s through no fault of her own, Clinton was carrying insurmountable liabilities, and therefore Democrats voting in the primaries shouldn’t vote for her. Women who have also committed Clinton’s crime often bristle at this argument, since they’ve heard it as the reason they can’t be promoted (“unfortunately, sexist men just don’t work as well with women, so you’ll never be a good manager”), can’t be given certain jobs (“juries just don’t like women lawyers”), or can’t pursue certain careers (“people just don’t trust the financial acuity of women money managers”). Their argument is that you don’t reduce sexism by pandering to it. And that’s a good argument.

But I also think it’s not unwise to think strategically about the likelihood of a candidate winning. So, while I wasn’t persuaded to vote against Clinton in the primaries on the basis of the argument that sexism and propaganda made her a bad candidate, I don’t think people who put it forward are spit from the bowels of Satan. They’re just people with whom I disagree.

The second version of this argument is more disturbing: that the DNC should have put forward a “better” candidate. I find this disturbing because I don’t think the DNC should “put forward” any candidate. I realize that is, at least to some extent, what all organizations do—the elite in the organization try to position for election the people they think will make the best candidates—so I’m not naïve enough to think the DNC will remain absolutely neutral (and, in fact, I ranted at a lot of DNC fundraisers during the primaries because I was outraged that there were DNC-funded ads attacking Sanders). But the absolute most the DNC should do is put its finger on the scale (and even that is problematic, discussed below)—Democrats need to elect candidates, not have them selected for us. Because Dems haven’t been doing well at the level of Governor or Senator, there weren’t a lot of possible candidates. Warren, Biden, and Booker all had reasons not to run, and other possibilities weren’t experienced enough. Thus, I reject the basic premise that the DNC should have selected any candidate for the Dems.

Third, voter turnout. Although there is some debate as to whether voter turnout cost Clinton the election, there remains a strong argument that it did. Or, at least, there’s a consensus that better turnout among nonwhite voters would have helped Clinton. But even people who agree that better voter turnout would have led to a Clinton victory disagree as to what that factor means. Some people connect it to the argument below—that voter suppression was crucial in the election. Others argue it’s yet another reason that Dems (or the DNC) shouldn’t have gone for Clinton—she didn’t have the charisma to get people to put up with the (probably deliberate) long lines in heavily Dem polling places. Some people argue that the low voter turnout was due to Sandersistas who refused to vote for Clinton (part of the narrative that they cost Dems the election), but I’ve never seen good evidence for that claim—it’s belied by the demographics of Sandersistas versus the low turnout. My impression, admittedly just from listening to (or reading) people who didn’t vote or didn’t vote for Clinton but might have, was that they believed the polls; they were certain she was going to win, and so didn’t think it was necessary for them to vote. They either didn’t vote, or engaged in a protest vote (to show the DNC that there are progressive voters). I’ll admit that, especially for people for whom voting would have required considerable sacrifice (such as taking unpaid time off work), this seems to me a reasonable attitude—95% is pretty much a sure thing for most people.

Fourth, the argument that Clinton’s campaign blew it because it focused on meetings with big-money donors toward the end rather than hand-clasping in battleground states is unfortunately often connected to presenting Clinton as arrogant. And I have to say that I get twitchy when anyone uses the word “arrogant” in regard to a powerful woman (or powerful nonwhite person).

It is not actually clear that Clinton did make a mistake with serious consequences in her strategies. More important, when we engage in hindsight, and consider counterfactuals (something I do in my scholarship frequently) we have to think about whether our sense that the outcome was obvious is the consequence of knowing the outcome. If you know of the dotcom crash of 2001, you can look back to various factors in 2000 and see all the evidence that it was coming, and then you can think to yourself what idiots people were for not seeing it. (You might even find quotes from some people who predicted it, and think what idiots everyone was for not listening to those geniuses). But that’s just intellectual shoulder-patting. Certainly, there was evidence of coming disaster, but there was also evidence that this was a new model of economic growth—you have to look at all the evidence people had in front of them in the moment and understand what reasons they gave for the choices they made.

To make considering counterfactuals anything other than 20/20 hindsight, you have to ask: were the choices reasonable within the context of that evidence, regardless of outcome?

Even if Clinton made the wrong decision, and there were people at the time who said that, the question should be whether she was making a decision that was obviously unreasonable in the moment, and I don’t think it was. For instance, her believing polls doesn’t make her arrogant—I think it’s reasonable for someone with her background to think she might know what she is doing. And what she was doing was believing the polls, and spending her energy getting money to throw downticket.

Had Clinton decided not to meet with big-money donors and had instead worked on ensuring she won a supposedly unlosable election by on-the-ground campaigning, and had she won, I think the same people who are lambasting her now would be lambasting her as arrogant for just trying to get herself elected instead of raising more money for Dems generally.

I think this criticism amounts to lambasting her for having believed the polls. Since it’s a criticism I’ve heard repeated by people who themselves cited the polls as authoritative in October, I don’t find it a very interesting argument.

Fifth, voter suppression. This is an interesting argument. There are lots of arguments that there was voter suppression, and that it was enough to flip the election. But it’s also disputed, and there are major sources that are silent on the issue (such as 538). There are two reasons I think it probably did happen—or at least that there was a determined effort to make it happen. The GOP Noise Machine works by deflection and projection (or, more accurately, projection as deflection), and the ginned-up fear-mongering about voter fraud quacks and walks like a projection/deflection move. If it is projection/deflection, then either there was actual voter fraud—that is, interference with voting machines—or voter suppression. But that’s sheer speculation on my part.

The more plausible reason to think there was voter suppression and it was effective is that the GOP has spent so much money, time, and effort trying to make it harder for nonwhites to vote. They must think it’s effective.

The sixth and seventh are generally connected—that millennials are sexist, or Sanders otherwise ruined the election for Clinton (every once in a while someone makes the claim about Stein, but that’s rare).

Let’s start with the Clintonista explanation that Sanders is entirely to blame (and keep in mind that isn’t Clinton’s explanation). It doesn’t hold up to empirical testing. It’s generally made on the basis of several leaps of inference. The best empirical support (and it isn’t very good) for blaming Sanders’ supporters relies on equating Sanders’ supporters and millennials, and that’s a false equation.  Clinton won the popular vote, and lost by small amounts in key states. So, a good argument for Sandersistas having cost Clinton the election would show that there were enough of them in the very close states who didn’t vote for Clinton to have shifted the election. And I’ve looked for that data, and I can’t find it.

The closest is some numbers run by Brian Schaffner, who estimates that 12% of Sanders voters voted for Trump (but the number might be 6%).  In a tweet, Schaffner estimated the state levels. If those estimates are correct, then, had all of those people voted for Clinton, she would have won. (All of this is explained in John Sides’ August 24, 2017 Washington Post article, “Did Enough Bernie Sanders supporters vote for Trump to cost Clinton the election?”)

So, does that mean that Sanders supporters cost Clinton the election, or, as another article terms them, Sanders “defectors”? Note the loaded language.

This whole narrative makes me nervous, especially since it’s taking Schaffner’s work as more definitive than even he says it is. And it seems to be getting used as a weapon in the purity war rumbling around the left—Sanders voters are unreliable, likely to defect, were too self-righteous to vote sensibly, or too unwilling to compromise. It’s also getting used by people who want to argue that Dems should have gone for Sanders, since it’s proof that he would have won. (It isn’t, since Clinton picked up more than that number in GOP voters who “defected.”)

First of all, we need to stop with the language of “defecting” and even “costing.” Even Schaffner points out that the people who did that weren’t typically Democrats, and they were racist. Sanders always did worse than Clinton with non-whites, but his defenders argue that he was changing his message and would have attracted more. Had he genuinely persuaded the public that he was not racist, he would probably have lost this 12%. Schaffner’s speculation is important to note: “I think what this starts to suggest to me is that these are old holdovers from the Democratic Party that are conservative on race issues. And while Bernie wasn’t campaigning on that kind of thing, Clinton was much more forthright about courting the votes of minorities — and maybe that offended them, and then eventually pushed them out and toward Trump.”

So, these weren’t Sanders supporters, I’d say—just people who voted for him in the primaries. And they certainly don’t represent anything important about Bernie-bros, or the young progressives who want the Dems to become more progressive—this isn’t that category. In fact, Schaffner’s evidence suggests that group did vote for Clinton, or, at least, didn’t cost her the election.

It might be that the fact that Sanders’ supporters repeated a lot of fake news reports and pro-Trump talking points on social media convinced others in their feeds to vote Trump or third party, but I haven’t found a study to suggest that’s the case. My purely anecdotal impression is that the people who voted for Sanders in the primaries and refused to vote for Clinton were the kind who had never voted for a Dem anyway (and didn’t vote for Obama, on purity grounds), or they lived in Texas, so they don’t really count as game-changers. I know that there were people who voted for Obama and then voted for Trump, but the research doesn’t suggest that many of them were Sanders’ supporters who refused to vote for Clinton.

So, the notion that Clinton lost just because of Sandersistas doesn’t really make the grade of a falsifiable claim. It’s just a guess, and not even a very good one.

And why would we make that guess? There is much better evidence about other factors, such as voter suppression and overconfidence among Clinton supporters (who thought she had it in the bag and so didn’t need to vote). 538 persuasively argues it was the Comey scandal and its impact on undecided voters (most of whom weren’t millennials). Why make a guess that blames fellow lefties? That seems to me unnecessary and strategically unwise.

People tend to blame the outgroup for anything bad that happens, and, unhappily, it’s not unheard of for people to be more concerned about heretics than heathens. That is, we can be more concerned about cleansing our group of people who aren’t like-minded enough than about people who are openly opposed to us. It’s an irrational act to which people are drawn when the ingroup is shamed, and that’s what I think we’re doing. It seems to me a skirmish in a purity war.

It’s also incredibly patronizing: it delegitimates the point of view—that Sanders was the better candidate—of people with whom we share goals.

I think this kind of move (like all skirmishes in a purity war) sets up a nasty dynamic—like two people fighting over who is at fault for burning the Thanksgiving turkey. Once a person says, “It’s your fault,” it’s incredibly difficult to get the conversation back into a useful realm in which people are problem-solving—it’s all about defending yourself.

I mentioned that I do know Sanders supporters who refused to vote for Clinton, some of whom never vote in Presidential elections (basically, any candidate popular enough to get a nomination isn’t pure enough for them—they liked that candidate when you had to buy the speech on vinyl at the show; it’s just hipster politics), but some of whom probably would have. And they live in Texas. In Texas, we are accustomed to being systematically disenfranchised, and every vote other than GOP is a symbolic action, so, although I disagree with that choice, I don’t think it’s evil or ridiculous or illegitimate or even unreasonable.

Eighth, many people for whom I care deeply make the argument that the DNC was wrong to go for a third-way neoliberal instead of Sanders, who would definitely have won. In some versions, the argument is that the DNC pushed a lousy candidate onto the Dems and is therefore responsible.

I find it really weird that so many reasonable people make that argument without seeing how odd it is. It’s either false or nonfalsifiable (like the Clintonista narrative that blames Sandersistas). It’s also really patronizing since it delegitimates anyone who voted for Clinton.

I see this argument a lot. It necessarily has two sub-points: that Clinton only won because of DNC support, and that Sanders would have won the general election.  That first argument, although repeated a lot in certain circles, has some implications that, I think (I hope), the people making it would reject if made explicit.

Clinton won the open primaries, and Sanders won the caucuses. So, by any reckoning, Clinton got more votes than Sanders. This argument says that she did so only because the DNC supported her. That’s a really offensive argument. If Clinton only won because of the DNC support, then the underlying assumption is that all those people who voted for Clinton would have voted for Sanders if the DNC had supported him—that they would do whatever the DNC told them to do.

I want to leave that out there because I really think that people haven’t thought that one through. Is that really an argument they believe?

That argument is saying that Clinton supporters were mindless sheeple who would do whatever the DNC told them to. The narrative is that Sanders’ supporters really knew how to vote and how to solve our problems, while Clinton supporters were just mindless followers who didn’t really know what we need or how we should vote.

That’s patronizing, just as patronizing as Clinton’s saying that Sanders supporters were young and misled. I think it’s wrong—factually, morally, and strategically—in both cases. Clinton supporters, like Sanders supporters, had good reasons and good arguments for their point of view; neither group should be delegitimated. And the second someone argues for delegitimating the other major group in a community, they’re engaged in a purity war.

Since Sanders never did as well with nonwhites and women as Clinton, and Clinton never did as well as Sanders with young people, any narrative that says THEY didn’t have legitimate reasons for supporting their candidate is just appallingly patronizing. It has to stop.

But, let’s take it a step further. Is it clear that Sanders would have won? The poll that Sandersistas cite also shows that Clinton would win. So, either it’s a bad poll, or Clinton was a less good choice, but not a bad one.

Sanders might have done better because he has the dangly bits, and so might not have been hurt by sexism, but Clinton lost white evangelical women, and there’s no reason to think Sanders would have gotten them (especially since he would have had anti-Semitism against him—a mirror image of the “don’t vote for Clinton because other people are sexist” argument), and there’s even less reason to think he would have gotten nonwhites. He still doesn’t get issues about race, after all. He still talks about “working class people” when he means “white working class.”

Anti-Semitism in the US is a non-trivial issue, and there has never been a candidate who wasn’t a practicing something, so there isn’t any good reason to think that he could have won over any bigots that Clinton lost. Unhappily, I think arguing that we shouldn’t have nominated Clinton because of sexism logically implies we shouldn’t have nominated Sanders because of anti-Semitism. If you’re arguing that Dems need to pander to prejudices, then you need to be consistent in that (and there are still huge swaths of American public opinion that equate “liberal Jew” and “communist”). And that’s why I think they’re both troubling arguments.

At the time of the poll that showed that Sanders was the better candidate, there was a counter-argument that the GOP wanted Sanders to be the candidate, as they knew they could win against a Jewish socialist, and so they were holding fire. I was extremely dubious about that argument, so I spent a few hours looking at my normal Right Wing Propaganda Machine sources, and I ended up deciding it was true. It was striking that there weren’t any negative articles about Sanders after October or so of 2015. For instance, Sanders’ wife had some complicated financial dealings (personally, I don’t think they were even on the same radar as Trump’s), but there was no mention of them in the Noise Machine. The few articles about him were about how Clinton was victimizing him. That doesn’t mean that supporting Sanders was definitely a bad idea and anyone who did was an idiot. It just means that it’s reasonable to have supported Sanders but unreasonable to think he would definitely have won.

And here I have to emphasize the point I’m making—I think politics is very rarely capable of definitively right judgments, and it’s almost always a question of probabilities. Thus, there are a lot of positions on an issue that are reasonable, but they don’t all necessarily turn out to be right. Being reasonable doesn’t guarantee that one is right, and turning out to be wrong doesn’t mean that one’s position was unreasonable. So, I don’t think it’s obvious that Sanders would have won, but that doesn’t mean I’m certain he wouldn’t have. I do think his situation was more wobbly than many people realize.

What most of my lefty friends don’t know (since, unlike me, they are sensible enough not to wander around in the GOP Noise Machine) is that Clinton was slammed for being socialist. I saw this a lot on friends’ social media too (and still do). Even the National Review, not a very extreme site (not as rabidly factional as Fox, let alone hate radio), ran that line of attack. I think it would have been an issue for Sanders as a candidate—perhaps not fatal (Obama got past it)—but an issue.

And here’s another point for which I have no data other than listening to people. The evangelical right has thoroughly politicized their churches, as they did during segregation, and it’s all about abortion. Unless Sanders was going to change the Dem stance on reproductive rights (which would have lost him huge numbers of people), he would have faced opposition from them. So, again, I think it was reasonable to support Sanders in the primary on the grounds that he was most likely to win; I think it was reasonable to support Clinton on those same grounds. I think it was reasonable to be unhappy there wasn’t a third Dem candidate.

I think we’re reasonable people. The premise of democracy is that no individual or group knows what is best for the community as a whole, that a community benefits from having people passionately committed to different political agendas, that pure agreement is never possible but respectful and grudging compromise is good enough, that listening to people with whom you disagree is useful, that important political change happens slowly, and that being certain and being right aren’t the same thing. I think Democrats should value democracy. I think we agree to have at least that much democracy within our party, and that means acknowledging that difference as to which is (or was) the best candidate is perfectly fine—people might have good reasons for disagreeing.

 

  1. The mobilizing passion/policy argument

Speaking of reasonable arguments and thinking about probabilities, what are reasonable ways to go on from here and not repeat the errors of the past? The two most common arguments as to what we should do now are both, I’ll argue, reasonable. I’ll also argue that they’re probably wrong. But they aren’t obviously wrong, and I doubt they’re entirely wrong. One is that we’re losing elections because we aren’t putting forward a charismatic enough leader who inspires passionate commitment to a clear identity (what I always think of as “the Mondale problem”). The second is that the problem with the Dems in 2016 is that they didn’t have a sufficiently progressive platform of policies, and so there wasn’t a mobilizing political agenda. Therefore, we should have a clearer mobilizing identity or political agenda.

I think these are reasonable arguments, but I don’t think either of them will work—I’m not sure they’re plausible (they certainly aren’t sufficient), and I’ll explain why in reverse order.

First, as to the “we just need someone with a clear progressive policy agenda” argument, I have to say that a lot of lefties who make that argument in my rhetorical world turn out to have no clue what policies Clinton advocated. They lived in a world of hating on Clinton throughout the election, and so remain actively misinformed about her policy agenda (and the number of them who shared links from fake news sites in October was really depressing).

A lot of lefties are political wonks, and so we assume that everyone else is equally motivated by policy issues. Unhappily, a lot of research suggests that isn’t the case. The next section relies heavily on three books: Hibbing and Theiss-Morse’s Stealth Democracy (2002), Achen and Bartels’ Democracy for Realists (2017), and Parker and Barreto’s Change They Can’t Believe In (2014). I should say, before going through the research on the issue, that I’m not as hopeless about the prospects for more policy argumentation in American public discourse as I think these authors are, and I do think that improving our politics through improving our political discourse is the most sensible long-term plan. For the short-term, however, I think it makes sense to be pragmatic about how large numbers of people make decisions about voting, and they don’t do it on the basis of deep considerations of policy—or on the basis of policy at all.

John Hibbing and Elizabeth Theiss-Morse summarize their research: people care more about process than they do about policy, and they “think about process in relatively simple terms: the influence of special interests, the cushy lifestyle of members of Congress, the bickering and selling out on principles” (13). According to Hibbing and Theiss-Morse, people believe that the right course of action on issues is obvious to people of goodwill and common sense who care about “normal” Americans: people believe that there is consensus as far as the big picture and that “a properly functioning government would just select the best way of bringing about these end goals without wasting time and needlessly exposing the people to politics” (133). Hibbing and Theiss-Morse refer to “people’s notion that any specific plan for achieving a desired goal is about as good as any other plan” (224).

A disturbing number of people believe that the correct course of action is obvious, because it looks obviously correct from their particular perspective. And I should emphasize that it isn’t just those stupid people who do it. Even lefties—even academic lefties—who emphasize the importance of perspective, teach about viewpoint epistemology, and reject naïve realism can regularly be heard at faculty meetings bemoaning the benighted administration for its obviously wrong-headed policy. In my experience, there is always a perspective from which the administration’s response is sensible. Most commonly, something that puts a great burden on my department (and my kind of department) is a policy that works tremendously well for most of the university, or for the parts of the university that the administration values more. Sometimes the bad policies are mandated by the state or federal government, or sometimes they are, I think, a misguided attempt to improve the budget situation. From my perspective, their policies look bad; from their perspective, my preferred policy looks bad.

I’m not saying that both policies are equally good, or all perspectives are equally valid, or that there is no way out of the apparent conundrum of a lot of people who all sincerely care for the university disagreeing as to what we should do. I’m saying that it’s a mistake for any of us to think that the correct course of action is obviously right to every reasonable person. I’m saying we really disagree, and that determining the best policy is complicated.

Most important, I’m saying that the tendency to dismiss disagreement and assume that complicated problems have simple solutions is widespread.

Since this depoliticizing of politics is widespread, how do people explain all the disagreement about policies? Hibbing and Theiss-Morse argue that people believe that most politicians are self-interested, and bicker so much because they are submissive to the “special interests” that donate money to them: “The people would most prefer decisions to be made by what [Hibbing and Theiss-Morse] call empathetic, non-self-interested decision-makers” (86). They quote one of the participants in their research who “said he had voted for Ross Perot in 1996 because he felt Perot’s wealth would allow him to be relatively impervious to the money that special interests dangle in front of politicians” (123).

Hibbing and Theiss-Morse are persuasive on the profoundly anti-democratic way that people perceive “special interests.” They say, “Our claim is that the people see special interests as anybody with an interest. Since government is filled with people who have interests, the people naturally come to the conclusion that it is filled with special interests.” (226)

People use the term “special interest,” according to Hibbing and Theiss-Morse, “to refer to anybody discussing an issue about which they do not care” (222).

We see ourselves as “normal” Americans, whose needs should be central to American policy, and whose problems should be solved quickly and sensibly. Were government functioning well, that’s what would happen, but it isn’t happening because the people in office put “special interests” above people like us, so we want someone who conveys compassion and care for us.[5]

That claim—that voters care more about caring and quick solutions to their problems and are neither interested in nor moved by policy deliberation—is supported by Achen and Bartels’ Democracy for Realists, which reviews years of studies in order to refute what they call the “folk theory of democracy.” That theory assumes that democracy is “rule by the people, democracy is unambiguously good, and the only possible cure for the ills of democracy is more democracy” (53).

Achen and Bartels conclude that elections don’t represent some kind of wisdom of the people, but “that election outcomes are mostly just erratic reflections of the current balance of partisan loyalties in a given political system” (16). Achen and Bartels argue that voters’ perceptions of policies—even basic facts—are largely determined by motivated reasoning (people use their powers of reason to rationalize a decision they have made for partisan reasons) or simply out of a desire “to kick the government,” even for natural disasters over which the government had no control (118). People aren’t motivated to join a party because they like the policies: “The primary sources of partisan loyalties and voting behavior, in our account, are social identities, group attachments, and myopic retrospections, not policy preferences or ideological principles” (267). By “myopic retrospections,” they mean events that happened in a very short period just before the election, for which they are punishing the incumbents.

Achen and Bartels refer to Hibbing and Theiss-Morse, and other scholars, in their conclusion that “many citizens in well-functioning democracies” don’t understand the value of opposition parties and the necessary disagreement that comes with different points of view.

They dislike the compromises that result when many different groups are free to propose alternative policies, leaving politicians to adjust their differences. Voters want ‘a real leader, not a politician,’ by which they generally mean that their ideas should be adopted and other people’s opinions disregarded, because views different from their own are obviously self-interested and erroneous. (318)

There is a right way, in other words, and it’s the way that looks right to normal people, and it’s the one that should be followed.

Michele Lamont’s The Dignity of Working Men (2000) emphasizes that many men (especially white) gain dignity from seeing themselves as disciplined, and explain their success as completely their own individual achievement—they actively resent goods (such as support of various kinds) being given to people who don’t work (see especially 132-135; this was less true of African Americans whom Lamont interviewed, who tended to emphasize the “caring” self). And, especially for white men, wealth isn’t necessarily good or bad; they don’t necessarily resent people who are more wealthy, but they do resent people with higher status who look down on them (108-109). They want to feel respected and cared about (which may explain Trump’s success with precisely the kind of voter whom many people thought would resent his problematic record with small businesses).

What all of this means is that thinking that the issue for the Dems in 2016, or the issue at the state and Congressional level, is that we haven’t articulated a compelling and thorough policy argument is almost certainly wrong. People who voted for Obama and then voted for Trump weren’t drawn by Trump’s policies but by his identity. As Achen and Bartels remind us, voters often get wrong the policies of their favorite political figures or their own party. And voters are easily maneuvered by mild shifts in wording (asking people about the ACA versus asking them about Obamacare, for instance). Large numbers of voters don’t care about policies.

They care about slogans—they care about being told that the party or politician cares about them, and will throw out the bastards, drain the swamp, clean house. Large numbers of people want to be reassured that their needs and desires for themselves are the only ones that matter and will be the first priority of the party/rhetor.

And a lot of voters vote on the basis of promises the candidate can’t possibly fulfill. This isn’t just something that ignorant supporters of the other side do. Certainly, Trump promised to do things the President can’t do without thoroughly violating the Constitution (since he was proposing to dictate Congressional and judicial policies), but both Sanders and Clinton proposed policies there was no reason to think they could get through a GOP Congress. I’m repeatedly surprised at the reactions of large numbers of people to SCOTUS decisions—many people (including smart and sensible friends) don’t seem to understand that it isn’t the job of SCOTUS to make sure that laws are “just”; it’s their job to make sure they’re constitutional.

In the early spring of 2016, I was in a hotel in Louisiana eating the fairly crummy free breakfast, and two men behind me were discussing Trump (they liked him). When they talked about how he was going to do something about all those poor people who lived off of the government, one of them said, “Well, what are you going to do? You can’t kill ‘em.” Then they got onto the subject of his plan for ISIS. One of them said, “They’re complaining that he won’t say what his plan is. But of course he can’t say what it is.” The other said, “Right, then ISIS would know it!” Trump’s promise was to develop a plan to crush and destroy ISIS within 30 days of taking office. His plan, as it turned out, was to tell the Pentagon to come up with a plan—as though that had never occurred to Obama?

What they needed was to believe he was the kind of person who could solve problems. He told them political issues are simple, and he was a straightforward person who, like Perot, couldn’t be bought—he would genuinely represent them and their interests. And now he is saying that it turns out every single issue is complicated.

I often wonder about those two guys, and what they make of all this. If research on people drawn to simple solutions is accurate, then they’re doing one of three things: 1) rewriting history, so that they believe they never voted for him on the grounds that he could solve things quickly and easily; 2) making an exception for his finding things complicated, and using his new admission that he was entirely and completely wrong in everything he said about politics as additional evidence of his “authenticity” and sincerity (and, since all they care about is that he sincerely cares about them, they’re good); 3) regretting voting for him, but not rethinking why they voted for him, or what their assumptions were about how to think about politics.

That’s what happened with the Iraq invasion, after all. People who had supported it denied they’d ever supported it, denied it was a mistake, or blamed Bush for lying to them. They didn’t decide that their process of making a decision about the war was a mistake—they didn’t stop watching the channels that had worked them into a frenzy about Saddam Hussein’s (non) participation in 9/11 or the (non)existence of weapons of mass destruction. They didn’t stop making political decisions on the basis of hating Dems, or trusting a political figure because he seemed like someone who cared about them.

So, no, we can’t reach that sort of person with a more populist political agenda because it isn’t about the political agenda.

I think it’s also a mistake to think that, since they’re engaged in demagoguery, and it’s winning elections for them, that’s what we should do. Demagoguery, a way of approaching public discourse that makes all political issues a question of us (angels) versus them (devils), works for reactionary politics because reactionary politics is attractive to “people who fear change of any kind—especially if it threatens to undermine their way of life” (Parker and Barreto 6). Reactionary politics, according to Parker and Barreto and also Michael Mann, arises when a group is losing privileges (such as whites losing the privilege of being able to see their group as inherently superior to non-whites). Democrats played that card for years, and it worked, but now it would alienate as many people as it would win (or more). The research on “moral foundations” is pretty clear that, while loyalty to the ingroup is important for people who self-identify as conservative, fairness across groups is important for people who tend to self-identify as liberal. Any rhetoric that says “this group is entitled to more than any other group” will alienate potential liberal voters.

While there is a lot of lefty demagoguery, it’s internally alienating. That is, the presence of internal demagoguery is what makes some people very hesitant to support the Democratic Party. And now we’re back to the two narratives of 2016—both are demagoguery, and both alienate people. We need to imagine a way to move forward that doesn’t involve any one kind of lefty becoming the only legitimate lefty.

And demagoguery won’t get us there.

And that brings us to the second option: find a charismatic leader. That’s a great idea, and we should always hope that our candidates can come across as people who really care about “normal” people (with, I would hope, a broader version of “normal” than reactionary politicians present), but 1) that is only an option if there is a deep bench of Democratic governors and Senators, and 2) that still doesn’t get a reasonable balance in Congress, state legislatures, or among governors.

So, what went wrong in 2016? We had a shallow bench. There are lots of reasons for progressives’ poor showing at the state and Congressional level—low progressive voter turnout in 2010 that enabled gerrymandering, a tendency for progressive voters only to come out for the Presidency, and various other complicated things (including the success of factionalized hate media). What won’t work is something I hear a lot of progressives say: “We just need to run more progressives.” People have been saying that for a long time, and trying it for a long time, and sometimes running progressives works and sometimes it doesn’t, so there is no “just” about it.

The first thing lefty voters need to do is get out the vote at the state level. And I think we need to be very clear that we care about all kinds of voters, and lefty rhetoric about hillbillies and toothless white guys doesn’t help, so we also need to shut down classism as fast as we shut down any other kind of bigotry.

And we can’t win within the parameters of demagoguery, so we need to stop trying to play within them.

 

  1. On the Democratic Party as a strategic coalition

At the beginning, I talked about my initial perception of politics as a contest between what is obviously the right course of action and various things that other people want—because they’re selfish, wrong-headed, corrupt, misguided. Compromise made a good thing worse because it was a question of how much bad had to be accepted in order to get some good done, and it should only be done for Machiavellian purposes. I think too many lefties operate within that model.

When the refusal to compromise goes wrong, it ends up landing people in purity wars, and those are never good for people who are trying to argue in favor of diversity and fairness. Purity wars can work well for authoritarians, racists, and people with what social psychologists call a “social dominance orientation,” but they don’t work well for the left.

So, simply refusing to compromise isn’t going to ensure better policies; it can ensure worse ones if, as happened under Reagan (or in Weimar Germany in 1932), the refusal to compromise means that the left is entirely excluded. Saying that refusing to compromise can be harmful isn’t to say that all compromises are good. I’m saying compromise isn’t necessarily and always good, but neither is it necessarily and always wrong. I’m saying that we should stop assuming it’s always evil, and we should stop falsely narrating effective lefty leaders as people who refused to compromise—they compromised. In fact, every effective leader on the left was excoriated in their time for having compromised too much.

The refusal to compromise comes from thinking about politics as a negotiation between right and wrong. We might instead think of politics 1) as the consequence of deliberation, not bargaining, 2) as an acknowledgement of the limitations of our own perspective, and/or 3) as a sharing of power with those people who share our goals. I think lefties would do well to think of at least some compromises as coming out of one of those three factors.

Here’s what I now think: thinking about compromise as always and necessarily wrong is bad, but neither is every compromise right. There are times when you say there is some shit you will not eat, and I am known as a difficult woman because I have refused to go along with various motions, statements, policies, and actions. I have nailed more than a few theses to a door. But I think lefties’ failure to think about compromise as anything other than distasteful realpolitik comes from, oddly enough, a less than useful way of thinking about diversity.

I think too often lefties accept the normal political discourse of thinking in terms of identity (even though we, of all people, should understand that intersectionality means that there aren’t necessary connections between a person and their politics), so we imagine that we have achieved diversity when we have a party that looks diverse—as though that’s all the diversity we need. So, we aspire to a political party that is diverse in terms of identity and univocal in terms of policy agenda. And I don’t think that’s going to work.

Instead of striving for a group that is univocal in terms of policy but diverse in terms of bodies, we need to imagine a party that is diverse in terms of what the Quakers call “concern.”

Early in the history of the Society of Friends, meetings struggled with what we would now recognize as burnout—people at meetings would speak of the need for everyone to be concerned about this and that issue, and everyone couldn’t be concerned about everything. So, there arose the notion that the Light makes itself known in different people in different ways, and that each person has a concern which is not shared with everyone. I think that’s what we on the left should do—we should be people concerned with inclusion, fairness, and reparative justice, and who are open to different visions of how those goals might manifest in moments of concern (and policy).

There are, of course, problems with calling for more diversity of ideology on the Left, including that it means cooperating with people whose views we think wrong. And so we have to figure out how much wrong we’re willing to allow. LBJ allowed Great Society money to go to corrupt Democratic machines, believing it was a necessary first step; Margaret Sanger cooperated with eugenicists, since it got her money and support; FDR compromised with segregationists in regard to the US military; Lincoln was willing to talk like a colonizationist to get elected and compromised with racists about pay for black troops. I don’t think they should have made those compromises.

There are some compromises that shouldn’t be made, and so we shouldn’t—but we should argue about what those limits are. And there may be times that we decide to compromise on purely Machiavellian grounds; I’m not ruling that out. But I am saying that lefties shouldn’t treat every disagreement as something that must be resolved with pure agreement on the outcome—that’s just a fear of difference. Lefties disagree. We really, really, really disagree. Lefties need to imagine that disagreement is useful, productive, and doesn’t always need to be resolved. We need to imagine a politics in which each of us gets something important for our well-being and none of us gets everything. And we need to stop hoping and working for a party of purity.

 

 

 

[1] If it helps one side too much, of course, then both end up losing—if interest rates are too high, no one takes out loans, and then lenders are hurt; or high interest rates might tank the economy, which can make it hard for lenders to find money to loan.

[2] It’s generally done through division—you and I are alike because we both hate them. Salespeople will often do it on big ticket sales, and con artists always use it.

[3] One sign of how factionalized a situation is is how often when I’m talking about this I have to keep saying that not all Sanders supporters are Sandersistas and not all Clinton supporters are Clintonistas. As scholars of group identity say, the more that membership in a group is important to you, the more that any criticism of any member of that group will feel like a personal attack.

[4] One of the odder arguments I sometimes hear people make is that Clinton was at fault for not motivating them—it’s the Presidency, not a hamburger; you’re responsible for making choices, and not a passive consumer of marketing. (Talk about a neoliberal model of democracy.) That argument irritates me so much I won’t even list it as a reason.

[5] While Hibbing and Theiss-Morse maintain this is not authoritarianism, because people want a direct connection to the halls of power when the government is not being appropriately responsive, I would argue that neither is it democratic (little d) in that there is no value given to deliberation or difference. And, of course, it’s how authoritarian governments arise—people give over all their power of deliberation to someone who will do it for them. When they want it back, they can’t always have it.

Privilege and perspective-shifting

It’s interesting that there is such resistance to the notion of privilege. Every human knows that privilege is a thing. I grew up in a very wealthy area, and we all knew whose parents could pull strings, get their kid a part-time job from which s/he couldn’t be fired, intimidate the principal, get rules bent. Let’s call that kid That Guy (although he wasn’t always a guy). People who grew up around rich people (even if they were rich) should be the first to acknowledge the power of privilege, since they must have had direct experience of it, but often they’re the last. And it isn’t because they secretly put hoods on at night and attend white supremacist marches.

I think there are several reasons: the stories that privileged people tell themselves about That Guy, a tendency to think in binaries, a commitment to naïve realism (and the often-connected notion that good people have good judgment), imagining self-worth and achievement in a zero-sum relation, and the impulse to hear “check your privilege” as something other than “time to listen.”

As to the first, That Guy got away with everything–he was completely tanked, totaled his car, and yet didn’t get arrested—and that obviously doesn’t apply to us. He never earned anything, and never faced consequences. And he was an asshole. People hear the observation of privilege as an accusation that we are That Guy. People think they’re being called an asshole. Self-identity is comparative—rich people can feel “poor” if they hang out with richer people, attractive people can feel unattractive, and so on. As long as there is someone with more privilege than what we have, then we can feel that we aren’t That Guy, and therefore, don’t have privilege (or none worth considering).

That impulse to consider our privilege trivial because of how it compares to someone else’s is connected to the tendency to think in binaries, especially a binary central to American political discourse: makers or takers (producers or parasites). You either work hard and make/produce wealth, or else you are a lazy person who takes from those who make wealth. William Jennings Bryan’s rhetoric described bankers (and people in the city) as parasitical on the real wealth production of the farmers; Father Coughlin positioned “international finance” (his dog whistle for “Jews”) as against the real producers of wealth; Paul Ryan and current toxic populist rhetoric cast public servants and anyone on assistance (unless they are Republican) as takers, with the top 1% as the makers.

People who think that you are either a maker or a taker can point to the ways they make wealth and therefore are enraged at being accused of being a taker. That Guy is a taker, but we aren’t him, so we are makers. The mistake here is the maker/taker binary. Privilege has nothing to do with whether you’re a maker or a taker, and it isn’t an accusation of anything. It certainly isn’t an accusation that the person hasn’t worked at all, nor is it an accusation of being an asshole.

The maker/taker binary is attractive because of the dominance in American culture of the “just world model” (or “just world hypothesis”): the notion that good people get good things and bad people get bad things. That model means that we can reason backwards from outcomes to identities: a person who has good outcomes (makes a lot of money, is healthy, is successful) has caused those outcomes to happen by their good choices, good faith, and good identity; a person who has bad outcomes (is financially struggling, unhealthy, unsuccessful, or has been the object of crime) has caused those outcomes through their poor choices, bad attitude, or lack of faith.

To tell someone that outcomes might be influenced by conditions outside a person’s choice (such as accidents of birth) is tremendously threatening to someone who believes strongly in the just world model. It threatens their sense of justice and belief in a controllable universe. And research suggests that being faced with uncertainty means that people will resort more firmly to their sense that their group is inherently good, so a privileged person, faced with evidence that the world is unjust, is likely to want to cling more fiercely to the notion that they are part of a good group.

And, if that person has a tendency to think in binaries then to say that outcomes might be influenced by conditions of privilege will be heard as saying that outcomes are purely the consequence of privilege—no choices involved. Thinking in binaries means that a person will tend to believe “monocausal” narratives (any outcome has one and only one cause). If the milk spilled, there was one action that caused it, and we can argue about whether it was yours or mine, but it can’t have been both, let alone the consequence of various factors.[1] So, privilege either determines everything or nothing; if a person who believes in monocausal narratives can find a single thing done by agency, then their life wasn’t purely the consequence of privilege, and therefore it wasn’t at all. For someone like that, individual agency is the single cause or has no impact at all.

When people ask that we consider privilege, they aren’t substituting one monocausal narrative (everything I have achieved is purely the consequence of things I have done) for another (everything you have achieved is purely the consequence of your privilege). It’s an observation about relative advantages. A person raised speaking a language has an advantage over someone who had to learn the language as an adult. Because of our tendency to assume that fluency with language necessarily means fluency of thought, we tend to think of people who come across as native speakers as more intelligent. So, a person who learned a language as an adult has to work harder than the native speaker to get taken seriously and be heard. That isn’t to say that the native speaker didn’t work at all—it isn’t a binary. It’s about relative advantage or disadvantage.

John Scalzi has an article I like a lot for explaining privilege, and it’s interesting to see how people in the comments misunderstand his point. His argument is that being a straight white male is like rolling high in the character-establishing point in a role-playing game. You have an advantage over someone else who rolled low, in every situation, all other things being equal.

What that means is that a person who has no disabilities and grows up in a wealthy family in a stable environment and is a straight white male necessarily has advantages over a gay black female in exactly the same situation. That’s a comparison that keeps everything other than gender, sexuality, and race the same. But a large number of the critical comments changed other variables, insisting that Scalzi was wrong because a rich (variable of wealth) gay black female would have advantages over a poor (changed variable of wealth) het white male.

That’s clearly not engaging Scalzi’s argument.

He says “all other things being equal,” and a large number of examples ignore that part of his argument. And, really, two of the three most common ways I see arguments about privilege go wrong are that they introduce other variables (especially class) or they treat the observation of privilege as a claim that the privileged person has done nothing at all (the maker/taker binary).

Since so much cultural and political discourse has the maker/taker binary, it’s understandable that people would force the observation about relative advantage into the maker/taker binary, but let’s be clear: that’s a misunderstanding that’s on the hearer. Saying you have privilege isn’t saying you’re That Guy. It’s saying that, in this situation, you have relative advantage.

One of my favorite studies is one you can do in any classroom. Ask students to write the letter ‘E’ on a small piece of paper in such a way that, when they put it on their forehead, it will be correct for someone looking at them. In one version of this study, half the group was given a small amount of money, and they promptly did worse at imagining the perspective of anyone else. Thus, giving relatively small signals of privilege to some students can make perspective-shifting harder for them.

That task, perspective-shifting, is crucial to democracy. Communities in which people only look out for their group (or for themselves) inevitably end up in highly-factional squabbling, in which people will cheerfully hurt the overall community just in order to make sure the other side doesn’t win. Democracies thrive when everyone involved believes that our best world is the best world for people whom we dislike. Democracy depends on people looking beyond what is best for them or their group to whether we are establishing processes by which we’re all willing to live. And that requires not just looking at whether this policy benefits me, as the person I am, but whether I would believe it was a good policy were I a completely different kind of person.

Privilege makes perspective-shifting less necessary, and makes it easier for us to think of our perspective as the “normal” one. If we are naïve realists (that is, if we believe that reality is absolutely apparent to us and we just have to ask ourselves if something is true in order to determine it is) then we are likely to think there is never any other perspective, or, if there is, that there is never any benefit to looking at things from that perspective since our perspective is right.

And our perspective is likely to be that we worked hard for what we have, that we earned every inch of our way, so it is likely to seem ridiculous to have someone say that we have privilege.

It’s a natural human tendency to attribute our successes to our work (and worth) and our failures to externalities. Even That Guy thinks he worked hard, and so doesn’t recognize his own privilege. Privilege isn’t a binary—it’s on a continuum; it isn’t an accusation of being a worthless taker, but an observation about relative advantage. It shouldn’t be the end of a conversation, but the beginning of one.

 

 

 

 

 

[1] It’s striking to me that people who tend toward monocausal narratives also tend to think of cause purely in terms of blame, but they aren’t the same. Perhaps, just as I was getting a glass of milk my husband requested, I was startled by the mayor having chosen to sound the tornado siren. The causes of the spilled milk might include my having an active startle reflex, the tornado, the mayor, my husband requesting a glass of milk, my decision to get him one while I’m up, perhaps whatever it is (genetics? experience?) that caused my startle reflex, but none of those factors is one it makes any sense to blame.

IV. “Decide for Peace or War:” How Hitler was normalized

This is the fourth in a series:
Introduction
Pt. I: "This collapse is due to internal infirmities in our national body corporate:" Popular science, their conspiracies, and agreement is all we need
Pt. II: "A source of unshakeable authority:" Authoritarian rhetoric
Pt. III: Immediate rhetorical background

From a September 3, 1944 tapped conversation between two Nazi generals who were British POWs, discussing when the German military should have refused to follow Hitler’s orders:

Hennecke: It should have been done in 1933 or in 1934 when things started.

Müller-Romer: No, the running of the state was still all right at that time. (From Tapping Hitler’s Generals 98)

The argument goes on for a while. Müller-Romer’s argument is that the political outcomes were just fine in 1933, and they should have waited till the political outcomes were worse. Müller-Romer says that “it wasn’t so bad before the war,” and Hennecke points out that it was: 1933 saw the jailing of Hitler’s political opponents. Hennecke’s most important argument was that political processes were set in place in 1933 that virtually guaranteed horrific political outcomes eventually. Hennecke was right.

In 1933, Hitler set in place the criminalization of dissent, a propaganda machine, and a single-party state—those are governmental processes of authoritarianism. Hennecke was arguing that, once those processes are in place, dissent is impossible if the policy outcomes turn bad. People have to protect, even in times when they like the policy outcomes, the processes they will need when they don’t like the policies.

Basically, anyone who took until after 1933 to realize Hitler was an authoritarian nightmare is someone who supported Hitler when it mattered. Realizing in 1939 that supporting Hitler was a mistake means that you’re thinking in terms of outcome and not process. Realizing in 1944 (as many of his generals did) that they had been backing the wrong horse is craven ambition—obviously, it’s only losing that hurt.

So, let’s assume that Hennecke was right, and 1933-34 was when the military should have tried to lead a revolt against Hitler’s dictatorship. Why didn’t they see that at the time? Why didn’t most people?

They didn’t because Hitler, in March 1933 (and in 1933 generally), was normalized. People who had fought against him now actively supported him, rationalized the violence of his supporters, insisted that he was at least better than the opposition, and believed that he was sincere in his professions of Christian faith (despite all appearances). The only group to vote against the act that enabled his dictatorship was the Social Democrats (democratic socialists; the communists would have voted against it, but they were banned or arrested). A rabidly factionalized press spun the situation as his being in control and decisive and finally doing the things that liberals had been too weak to do–such as cleansing the community of criminal elements. And those talking points were repeated by people who normalized behavior they had been condemning just months before.

People who think they would never have supported Hitler believe that they would never have supported a leader who pounded on the podium, screaming for the extermination of various races and an unwinnable war against every other industrialized country. And that’s what they think he did because, prior to 1933 (one might even argue late 1932) that was what he did. So, one way to think about the rest of this post is whether that test—I would never have supported Hitler because I would never have supported someone who advocated genocide and world war—is a good one for thinking about his March 23, 1933 speech. And the answer is that it isn’t.

The speech was part of the Nazis’ goal of establishing a one-party dictatorship, something that would be achieved in what was called “The Enabling Act.” They needed a 2/3 vote of the Reichstag, and a special election had been called for those purposes. They didn’t get 2/3, so they banned and arrested the communist leaders and declared they only needed 2/3 of the non-communist votes. That was a violation of the constitution. But, by the time Hitler spoke, they had done the math and knew the outcome.

Hitler’s speech was in the context of what Aristotle called deliberative rhetoric. There was a policy on the table, and so it would be expected that Hitler would engage in policy argumentation to support it (short version: he didn’t, and that’s important).

This was the Reichstag—the major deliberative body of Germany—and it was considering a major policy change; thus, in a healthy rhetorical community, Hitler’s speech on March 23, 1933 would have been deliberative rhetoric. He would have had to argue why the “Enabling Act” was an effective and feasible solution to real problems that would not go away on their own, and that the act would not involve solutions worse than the problem. He would have had to make that argument acknowledging the multiple policy options available, and to a community that was familiar with multiple sides and who insisted that he be fair to all those sides.

But Germany wasn’t a healthy rhetorical community. That isn’t what he did. He gave an epideictic speech, with bits of judicial. He didn’t engage in policy argumentation. Hitler’s speech has the overall structure of need/plan, but not in a policy argumentation way—it’s more like a skeezy sales pitch. Skeezy sales pitches have a rough need/plan organization, but the need is that you’re kind of a bad person and the plan part of the argument is that my product/company/election will solve that need thoroughly and completely. That rhetoric always begins by making the consumer slightly uncomfortable (insecure, ashamed, or worried), but with an implicit promise that they could be better. Pickup artists call it “negging” (“You would be pretty if you smiled”). And then the product is offered that will solve the problem; with pickup artists—and Hitler—the solution is the person. He didn’t engage any of the other parts of deliberative argument (consideration of multiple options, solvency, feasibility, unintended consequences).

Overall, Hitler’s argument was: things have been bad in so many ways, and real Germans have been consistently screwed over and ignored in our political system. The major decision-making body has been paralyzed by political infighting by professional politicians who haven’t been paying attention to the kind of people (in terms of race and religion) who are the real heart of this nation. Our relations with other countries have been completely lopsided, and we’ve been giving way more than we’ve been getting. We aren’t a warlike people, and we don’t want war, but we insist on the right to defend our interests. Liberals and communists are basically the same, in that liberalism necessarily ends up in communism. Situations are never actually complex, but people who benefit from pretending they’re complicated will say they are (teachers, experts, governmental employees, lawyers). The correct policies we should be pursuing are absolutely obvious to a person of decisive judgment—being able to figure out the right course of action doesn’t require expert knowledge or listening to people who disagree. The ideal political leader has a history of being decisive. And that person cares about normal people like you who are the real heart of Germany, and it’s easy for someone like you to know whether the leader has good judgment and cares about you—you can just tell. There is one party that supports the obviously correct course of action, and we should try to ensure that party has control of every aspect of government, and that there will be no brakes on what that party decides to do.

So, how does he do that? And why does it work?

He begins the speech with a vague reference to the proposal. It’s a proposal for shifting from a parliamentary system to a dictatorship, but he doesn’t say that. He says it’s “a law for the removal of the distress of the people and the Reich” (15). He grants that the procedure is “extraordinary” (a state of exception, so to speak), and gives “the reasons” for it, and his “reasons” are a purely need/blame argument (more appropriate for a judicial speech) that goes from the beginning till about fifteen paragraphs in (in the English—in the German, it’s about twelve), until he says, “It will be the supreme task of the National Government…”

I mentioned that Hitler’s policy solution was himself, and he sets up that solution by how he describes the problem. His argument is that Germany is undeniably in the worst imaginable situation (hyperbole that makes him seem to be completely on their side—his commitment to the ingroup is extreme), for three reasons: first, the country has been led by Marxist politicians who are incompetent, deluded, just looking out for themselves, and/or actively villainous; second, the moral, political, and economic collapse of Germany “is due to internal infirmities in our national body corporate;” third, the “infirmities” of our life mean that nothing is getting done because we’re in a deadlock: “the completely irreconcilable views of different individuals with regard to the terms state, society, religion, morals, family and economy give rise to differences that lead to internecine war” (16). Those last two are especially significant, in that they signify what kind of policies Hitler would enact. His argument in those two is that there are “defects” in our national life, especially views “starting from the liberalism of the last century,” that have inevitably led to this “communistic chaos” (16). There are political views, he says, that enable the “mobilization of the most primitive instincts” and end up in actual criminality. He’s equating disagreement and violent political conflict, and blaming all that on the presence in the community of a defect that will necessarily end in Soviet communism.

This whole argument of Hitler’s simultaneously promises stability—an end to disagreement and political paralysis—while ignoring that his own party was one of the major causes of the political paralysis, violence, and criminality of Weimar politics. Thus, this whole part of his argument is projection and scapegoating.

For instance, one of those “reasons” that his dictatorship is necessary is that it was the 1918 Marxist organizations that committed “a breach of the constitution” by putting in place a revolution that “protected the guilty parties from the hands of the law.” These Marxists, according to Hitler, tried to justify what they did on the grounds that Germany was guilty of starting WWI.

Let’s assume, for the sake of argument, that all of his claims are true (they aren’t).

Why in the world is he even arguing about who is to blame for the loss of WWI? Even if the Weimar democracy was created by evil witches who mistreated bunnies and shoved little old ladies out of the way in crosswalks, that wouldn’t make his dictatorship a good plan. The Weimar democracy might have been Marxist (it wasn’t), it might have been disastrous (its major problems were Nazis and Stalinists), it might have lied about WWI (it didn’t), but even were all those things true, it still wouldn’t necessarily mean that Hitler’s becoming a dictator was the right solution. It isn’t even clear that the people who put a democracy in place at the end of WWI were acting in an unconstitutional way. But it was absolutely clear that Hitler was.

He needed 2/3 of the Reichstag vote to get the Enabling Act passed, and he didn’t have that number. So, he had Marxists arrested and prevented from entering the chamber, and he decided on an interpretation of the constitution that said that, because he had prohibited their entry, their numbers didn’t count toward what amounts to quorum. (That isn’t what the constitution said.) So, Hitler’s hissy fit about what “the Marxists” did in 1918 isn’t a very accurate description of what they did, but it’s a perfectly accurate description of what he was, at that moment, doing. That accusation of unconstitutional action was projection.

His whole argument about violence and paralysis was also projection, since the violence and refusal to compromise (the cause of the paralysis) came from both the Stalinists and Nazis. Hitler’s argument is the pretty standard argument for people who think they’re totally and always right (that is, authoritarians): our problem is that you are disagreeing with me. The conflict would stop if you just agreed with me.

Hitler’s argument can be summarized in what, following Aristotle, people call an enthymeme. “My dictatorship is necessary because the Marxists are just awful.” Hitler was relying on the tendency a lot of people have to decide that a conclusion must be true if they believe the evidence is true. (It’s how most, maybe all, scams work.)

Hitler’s kind of argument takes it one step further than even skanky associational arguments go. He’s saying that, if the economic disaster of post-war Germany can be associated with Leninist-Marxists in any way, then they caused it, and therefore Hitler’s dictatorship. His argument is “My dictatorship because MARXISM!!!” (Notice the slip between Leninist-Marxism and Marxism.) That isn’t a logical argument, but associational. Even were it true that the “Marxists” were responsible for Germany’s post-war plight (as opposed to the war itself being the problem), then the “solution” isn’t necessarily Nazism. There were lots of other economic and political systems opposed to Marxism.

After all, liberal democracy is opposed to Marxism (liberal democrats are the first people up against the wall, as Marxists so charmingly say), as are democratic socialists (who accept some aspects of Marx’s critiques of capitalism, but oppose—unhappily often with their lives since Soviet Marxists call them liberals—Soviet Marxism and generally any kind of violent revolution), non-Soviet Marxism (Trotskyites, for instance), non-Marxist kinds of communism, the odd monetary model long promoted by the Catholic church, mercantilism, and even various other kinds of volkisch and reactionary groups. Nazism had a lot of opponents; it wasn’t the only choice other than Soviet Marxism.

So, what Hitler did was to scapegoat Marxists for Germany’s post-war situation, and associate every political party opposed to him with Marxists. [1]

Calling the people who instituted the Weimar Constitution “Marxist” is a deliberate smear—it’s just insisting that everyone to his left (and most were) is Marxist (a not unheard of tactic in our own era). It’s an equation he makes later in the speech, and made consistently in his rhetoric—he characterizes all forms of non-authoritarian governments as Marxist.

That’s a kind of argument that appeals to people who can’t manage uncertainty, ambiguity, or nuance, and who see all members of any outgroup as essentially the same. When we are in fight or flight mode, we are drawn to binaries. Something is good, or it is bad. Something is right, or it is wrong. And, since they think in binaries, people drawn to that way of thinking apply the binary to thinking itself: either you believe everything is strictly right or wrong, or you believe it’s all equally good. [2]

Such people would really like Hitler’s speech, since he presents the situation as absolutely black and white. I said that he presents himself—not a set of policies—as the solution to their problems. He says, it is obvious what needs to be done; it is obvious that our bad situation is the consequence of politicians who were either “intentionally misleading from the start” or subject to “damnable illusions.” They were just looking out for themselves, giving people “a thousand palliatives and excuses.” They just made promises they never kept.

He doesn’t argue that his (vague) policy is the best policy choice; he’s arguing that “Marxists” caused all of Germany’s problems and concludes from that claim that his dictatorship is necessary. That’s a fallacious argument in many ways. The logical form of Hitler’s argument is, as I mentioned, “My dictatorship is necessary because the Marxists are just awful.” Hitler’s dictatorship is in opposition to Marxism, and Marxism is bad, so his dictatorship is good. If you put that in logical terms, you have “A is necessary because not-A is bad.”

There are a lot of “not-A” out there. Were Hitler’s argument one that appealed to premises consistently, then he would also have to endorse this argument as equally logical: “Making my dog Louis a dictator is necessary because Marxism is bad.” After all, my dog Louis is also not a Marxist—he is not-A. Therefore, he would be just as great a leader as Hitler.

He wouldn’t be a great leader at all. He would mostly eat things, and demand a lot of walks. Whether he would have been a better leader than Hitler is an interesting question—he probably wouldn’t have been worse—but that wouldn’t make him a good leader. Yet, Hitler’s argument would apply as logically to Louis as it did to Hitler: after all, Louis would be a great leader because Marxists are bad doesn’t have any worse a major premise than “Hitler’s policies are good because Marxists are bad.”

And, let’s be clear: Louis is VERY opposed to any kind of Marxism.

And, really, that was Hitler’s argument, and that’s all it was. His argument wasn’t logical—he never put forward a major premise to which he held consistently. His argument was always “What I propose is good because I am good (decisive, caring about you, looking out for real Germans/Americans, not a professional politician, successful), they are bad,” and as long as he could rely on his audience not to think too hard about that major premise (“anyone who is decisive, caring about you, looking out for real Germans/Americans, not a professional politician, successful is proposing good policies”), then he was fine. And, I’ll point out again that Louis is very decisive, he cares about everyone, he is protective of his pack, he is not a professional politician, and he is very good at his job.

Simply looking at whether a claim has support is cognition; good deliberation, I’m saying, requires meta-cognition: that people look at how they are arguing, and that they don’t just ask themselves whether an argument seems true to them, but whether the way it’s being made is one they would consider good regardless of ingroup/outgroup membership.

Metacognition requires stepping back from an argument that justifies what you want to believe (what is called “motivated cognition”) to thinking about whether you would think your way of thinking is wrong if someone else did it. And that is the problem with the “I don’t care if it’s logical, I just know it’s true” line of argument. Do you endorse that kind of argument when other people make it? Only when they get to your conclusions. So, that method of making decisions (Hitler’s, by the way, and most authoritarians) is about ingroup loyalty, and it’s okay if your ingroup is magically always right, but there is always something mildly narcissistic about it, since it assumes your intuitions are perfect.

People who reason that way tend to favor people to whom they feel close, while, the whole time, they think they are being fair. Since they are unwilling to consider whether their method of reasoning is bad, they never notice when they’ve made mistakes. They sincerely believe their method of reasoning is good because it’s always worked for them. The question is: would they know if it was a bad method? Do they have a system for checking if their intuitions and feelings are bad? Yes, their method is to ask their intuitions and feelings whether their method is bad.

Albert Heim reported that Hitler had told him, “I don’t give a damn for intellect[–] intuition, instinct is the thing” (Tapping Hitler’s Generals 165). That fits with what Hitler said throughout his rhetoric—he insisted people trust him because his intuitions were so good that he could reject any expert advice that contradicted him. (Like most authoritarians, he endorsed expert advice that confirmed his views.) I like the term epistemological populism for this move: something “everyone” believes is treated as true, even if it’s empirically false, because experts are just eggheads (unless they agree with you), and you can always appeal to the popular notion.

What the people who make that argument don’t notice is that their “common sense” is only “common” to their ingroup. Their “popular” notion (that this group is lazy, that that group is greedy) never includes all the groups who might have an opinion on the issue—when they say “everyone,” they don’t include the outgroup. It’s one of the subtle ways we delegitimate (and even dehumanize) the outgroup. When we do this, we aren’t trying to delegitimate or dehumanize them. It’s just that we take our ingroup associations and universalize them—since I think squirrels are evil, and I only hang out with people who think they are, then it will come to seem to me obviously true that “everyone” agrees that squirrels are evil. If Louis, who CLEARLY thinks squirrels are evil, runs for office, I will feel that he represents “everyone.” I can ignore the squirrels’ opinion on the issue.

If you like Louis (and, really, who doesn’t? he’s adorable) and he makes you feel good about yourself, then you will not hold him to the same standards that you hold other political figures. You will look for reasons to support him, and you will find them (you are motivated to use your cognitive powers to justify his actions), and so you will think your support of him is rational, since you can find examples and arguments to support your claims about him and his claims about himself.

But what you can’t find will be major premises that you will consistently endorse. Louis is great because he says he’s nice to you. The other candidate tries to be nice to you, but that’s just cynical manipulation on their part. Louis said something untrue, and so did that candidate. Louis was mistaken, but that candidate was lying.

Hitler played on that tendency brilliantly in this speech. Hitler made a set of claims his audience would like hearing: there is disorder, decay, uncertainty, and weakness. We don’t want to listen to any argument that Germany was to blame for WWI, or that we lost it, or that the Versailles Treaty wasn’t much worse than the treaty imposed on the French after the Franco-Prussian War of 1870.

What he said was, “You’re humiliated right now, but you could be awesome with me as dictator. Germans are humiliated right now but will be great once you put all power in me.” (Or: you would be pretty if you smiled.) Marxists are bad, and I am the kind of person who will impose order, end decay, never myself feel uncertainty, and always be strong.

That claim involves the rhetorical strategy of projection. Whether Germany was at fault for the war is an interesting question (most scholars say yes, but very few say that only Germany was at fault), and whether the installation of the new constitution in 1918 was done in a constitutional way is an interesting question, but there is no doubt that Hitler’s pushing through of the Enabling Act violated the terms of the constitution. That move is called projection because it’s taking something you are doing and projecting it onto someone else—like a movie projector.

And it tends to work because it’s a particularly effective instance of the large category of fallacies involving a stasis shift (generally called fallacies of relevance). In a perfect world, we make arguments for or against policies on the basis of good reasons that can be defended in a rational-critical way (not unemotional—it’s a fallacy to think emotions are inappropriate in argumentation). But, sometimes our argument is so bad it can’t stand the exposure of argumentation, in that we can’t put forward an internally consistent argument. Saying that Louis would be a great President because squirrels are evil is a stasis shift—trying to get people to stop thinking about Louis and just focus on their hatred for squirrels.

Arguments have a stasis, a hinge point. Sometimes they have several. But it’s pretty much common knowledge in various fields that the first step in getting a conflict to be productive (marital, political, business, legal) is to make sure that the stasis (or stases) is correctly identified and people are on it. If we’re housemates, and I haven’t cleaned the litterboxes, and we have an agreement I will, then you might want the stasis to be: my violating our agreement about the litterboxes.

Let’s imagine I don’t want to clean out the litterboxes, but, really, it’s just because I don’t want to. I have made an agreement that I would, and when I made the agreement I knew it was fair and reasonable. So, even I know that I can’t put forward an argument about how tasks are divided, or who wanted a third cat and promised to clean litterboxes in order to get that cat. Were this a deliberative situation, I would be open to your arguments about the litterboxes, but let’s say I’m determined to get out of doing what I said I would do. I don’t want deliberative rhetoric. I want compliance-gaining—I just want you to comply with my end point (I don’t have to clean the litterboxes).

I will never get you to comply as long as we are on the stasis of my violating an agreement I made about the litterboxes, since that’s pretty much a slam dunk for you, so I have to change the stasis.

The easiest one (and this is way too much of current political discourse) is to shift it to the stasis of which of us is a better human. If you say, “Hey, you said if we got a third cat, you’d clean the litterboxes, and we got a third cat, and you aren’t cleaning them,” I might say, “Well, you voted for Clinton in the primaries and that’s why Trump got elected,” and now we aren’t arguing about my failure to clean the litterboxes—we’re engaged in a complicated argument about the Dem primaries. I can’t win the litterbox argument, but I might win that one, and, even if I don’t, I might confuse you enough that you will stop nagging me about the litterboxes.

[I might also train you to believe that talking about the litterboxes will get me on an unproductive rant about something else, and so you just don’t even raise the issue. That’s a different post, about how Hitler deliberated with his generals.]

Or, I might acknowledge that I don’t clean the litterboxes, but put the blame for my failure on you because your support of Clinton is so bad that I just can’t think about the litterboxes—that’s another way of shifting the stasis off of my weak point and onto an argument I might win.

Hitler’s argument shifts the stasis off of his weak points (whether he has pragmatic plans and just what they are) to ones he thinks he can win—that Marxists are bad, and that “real Germans” (the “volk”) are beleaguered victims of a political system that rewards professional politicians for their dithering.

All that people know about Hitler’s policy is that he is abandoning democracy in favor of a single-party state that explicitly favors his party over others—the judicial system, educational system, arts, parliament, churches, science, and military will all be purified of anyone who isn’t fanatically committed to his political party.

Hitler is working on the basis of what Chaim Perelman and Lucie Olbrechts-Tyteca called “philosophical paired terms.” People who think in binaries also tend to assume that the binaries are necessarily logically chained to each other (which is why Laclau called them equivalential chains). So, for Hitler, there is a binary between “order” and “disorder” and that pair is necessarily connected to “his dictatorship” and “democracy.” Think of these terms as like the logic sections of some standardized tests that have questions like: “Tabby is to cat as pinto is to [what].” The answer is supposed to be “horse.”

Hitler’s argument is, in effect, a chain of those paired terms: order is to disorder as his dictatorship is to democracy, as real Germans are to Marxists.

That chain of paired terms is what enables Hitler to get to what is actually an amazing argument for a purportedly Christian nation: that valuing fairness across groups is suicide, and part of a plot to weaken Germany.

And there’s a really interesting characteristic about this kind of argument. It’s normal for people to assume that an authoritarian state provides more order than a democratic one, and that it therefore is peaceful, but that’s an associational argument [strong father model], not an empirical or logical one. Authoritarian states take the conflict, violence, and chaos, and put them out of sight of “normal” people (a category that tends to get defined in increasingly small ways as time goes on). Empirically, and this was especially true in Hitler’s regime, authoritarian single-party governments have extraordinarily disorderly policies (they follow the whim of the person or people in charge), completely arbitrary applications of coercion, and they are systemically violent (think about how segregation operated in the Southern US).

But Hitler tries to equate his party with order, when the Nazis were the source of much (most?) of the disorder. The Freikorps engaged in random violence against Jews and lefties of various stripes. The Stalinist communists also engaged in violence, but there is no indication that democratic socialists, let alone liberals, relied on violence. So, the notion that Hitler’s party was opposed to violence just didn’t fit the situation, but his supporters appear to have believed it.

And they did it, I’d suggest, to the extent that they followed his associational chain. He chained various things together through association—order, authority, control, honor, true German identity, purification, peace, trust in him. He also throws in there victim/villain.

Logically, Nazis are not pure victims of violence. They were, in fact, murderers, thugs, and extortionists, but they were tolerated because the police and judges generally liked them (since their violence was against Jews and liberals). They got caught out in sheer murder (of Konrad Piezuch), and Hitler’s stance was that Nazi violence was always already self-defense. And Hitler’s chain of connections enabled him to connect Nazis to victims of violence. A reasonable description of the situation would have made Nazis mostly villains but also victims. Once you have a culture (or argument) that is only going to reason through paired terms, then Nazis are either victims or villains (in that world, you can’t be both). Since Nazis are connected to order, and order is opposed to violence (assertions Hitler made elsewhere in his argument), then, by the time he gets to Nazi murderers, it would seem “logical” to see them as opposed to villains (communists), so they MUST be victims.

And Hitler did sound more reasonable than he had in his beerhall speeches. He never said the word “Jew,” and only mentioned race twice. He didn’t say anything about Aryans, and talked a lot about the “volk.” For many people, the term simply meant “the people,” but for people steeped in the long and racist “volkish” literature, it meant the racial group that constituted true Germans. So, it was a dog whistle, unheard by many, but whistling up racism in others. Hitler used other racist dog whistles—he talked about decay, infirmities, the need to detoxify our public life, the “moral purging of the body corporate.” He called for greater spiritual unanimity, and ensuring that all art and culture would “regard our great past with thankful admiration” (19, emphasis added), so “blood and race will once more become the source of artistic intuition.” Someone who wanted to see him as a person who had changed (or who had never meant the racism) could point to the apparent absence of racism; someone who wanted to see him as the beerhall demagogue who would purify Germany of unwanted races could see him as someone who hadn’t changed.

But, or perhaps and, Hitler’s speech made a lot of promises that a lot of people who really wanted an end to the uncertainty of Weimar Germany politics would like to hear. The bulk of Hitler’s speech (where the plan should be laid out) is a series of vague assurances regarding the churches, the judiciary, economics (including his policies toward agriculture, the unemployed, and the middle classes, self-sufficiency), and foreign policy.

Those promises are:

  • Church. He calls for a “really profound revival of religious life,” implies he will not compromise with “atheistic organizations” and suggests that he believes religion is the basis of “general moral basic values.” He says his government “regard[s] the two Christian confessions [Catholic and Lutheran] as the weightiest factors for the maintenance of our nationality” and promises “their rights are not to be infringed” (20). He says the government will have “an attitude of objective justice” toward other religions, something Catholics and Lutherans would like hearing—that he connects the nation and their religion and doesn’t intend to put “other religions” on an equal footing with them (his audience would probably think immediately of Judaism, and possibly Jehovah’s Witnesses). Since Hitler was not himself a particularly religious person, and his organization had a lot of people in it openly hostile to Christianity, this alliance of his party with the two most powerful religious organizations would be reassuring, and it did seem to be persuasive (the Catholic political group voted for the Enabling Act).
  • Judiciary. Hitler was clear that he wanted a factionalized judiciary that didn’t respect the rights of all individuals equally (an Enlightenment value). The judicial system should, he said, make “not the individual but the nation as a whole alone the centre.” For him, the nation is the “volk” (discussed below), and judges should always put the concerns of the volk first—not abstract principles of due process.
  • Economics. Here Hitler was especially vague (which is saying something, considering how vague the whole speech is). He said the government would protect the economic interests of “the German people” not by “an economic bureaucracy to be organized by the state, but by the utmost furtherance of private initiative and by the recognition of the rights of property.” This was a clever apparent disavowal of the socialism that was central to Nazism in its beginnings, but one that wouldn’t alienate those people in the party who thought Hitler was still socialist (he would later have them killed).

He insisted on the importance of German agriculture, promised to use the unemployed to help production, told the middle classes that “I feel myself allied with them” (a classic scam artist claim, since he was actually a millionaire who didn’t pay taxes, and his policies wouldn’t help the middle class—it’s one of only two times he used the first person in the speech, which is rhetorically interesting), admitted that pure self-sufficiency was not possible, and then slowly moved into the more bellicose aspect of his speech.

When talking about the debt, he presented his stance as reasonable, in that he was simply insisting on fairness, a theme he drew into discussions of foreign policy. In the English translation, this section and the next (pages 22-23) have italicized text, in which he takes a strong stand toward other countries claiming that Germany’s policies were forced on them by the unreasonable behavior of other countries. And that theme leads him to what appears to be an absolutely clear statement of his policy.

For the Overcoming of the Economic Catastrophe

three things are necessary:–

  1. absolutely authoritative leadership in internal affairs, in order to create confidence in the stability of conditions;

  2. the securing of peace by the great nations for a long time to come, with a view to restoring the confidence of the nations in each other;

  3. the final victory of the principles of commonsense in the organization and conduct of business, and also a general release from reparations and impossible liabilities for debts and interest. (24)

People often mistake a set of assertions presented in what rhetoricians call “the plain style” for “a clear argument.” They aren’t the same thing at all, or even necessarily connected. A clear statement of Hitler’s policies would explain how authoritative leadership will create confidence—but he doesn’t explain, because he’s got an associational argument, not a logical one. An incompetent authoritative leadership (one that starts a war, for instance, or engages in kleptocracy) won’t necessarily stabilize conditions, and stable conditions won’t solve the worldwide depression. That’s a clear statement of a vague policy.

The second is simply a lie, but a comforting one, since Hitler’s previous rhetoric had been so war-mongering—that clear statement of a vague policy would make gullible people feel that Hitler’s previous rhetoric had just been to mobilize his base, or perhaps the responsibilities of leadership had sobered him. And, even did he actually mean it (he didn’t), Germany’s economic situation wasn’t the consequence of concern about war.

People love to hear that leaders will now act on common sense. We like to believe that our views are shared by all reasonable people, that the solutions to our problems are obvious, and that experts and eggheads should just be ignored in favor of what regular people believe. Appealing to his audience’s “common sense” also enables Hitler to sneak past the rhetorical obligation of saying what policies exactly he’ll pursue—a sympathetic person will believe he has, since they will now offer their own notions of common sense in the place of the policies he hasn’t mentioned.

Hitler promises he can achieve all these things, but not if “doubt were to arise among the people as to the stability of the new regime”—one of the ways he tugs on that set of chained terms. Stability and peace are linked, and in opposition to democratic deliberation. So, he says, he will continue to respect the Reichstag, but they won’t meet.

There is a jaw-dropping instance of strategic misnaming in his penultimate paragraph. He says (and it’s in italics in the English): “Hardly ever has a revolution on such a large scale been carried out in so disciplined and bloodless a fashion as the renaissance of the German people in the last few weeks” (26). In fact, the violence of the previous weeks was unparalleled. As Richard Evans says, after January 30, when the Interior Ministry ordered that police no longer provide protection for opposition meetings, “Nazi stormtroopers could now beat up and murder Communists and Social Democrats with impunity” (320). Evans also notes that, in January, the Nazis “unleashed a campaign of political violence and terror that dwarfed anything seen so far” (317). Hitler is simply insisting on his version of truth—that his audience would know it to be inaccurate wouldn’t change their perception of it as “true” (that is, truly loyal to the group—what is called a “blue lie”), and it would make them see him as strong. And then we get the second time he uses the first person—having just uttered a blazing lie, he says, “It is my will and firm intention to see to it that this peaceful development continues in future” (26).

That sentence is so rhetorically brilliant that it is chilling. He is simultaneously threatening violence, renaming violence “peaceful,” and, because he’s claimed there wasn’t violence, giving himself plausible deniability. The dogs all perk up their ears at that very loud whistle, and the ministers of the Reichstag know that he is telling them: either support the Enabling Act, or there will be civil war.

And he ends his speech by saying, “It is for you, Gentlemen, now to decide for peace or war.” And they did. They decided for war—one that would claim to be a war bringing world peace by exterminating difference.

In 1933, Hitler gained enough legitimacy to put authoritarian processes in place because:

1. he managed to look enough less demagogic when arguing for the Enabling Act than he had during the previous years that people could think he had changed (or that the demagoguery had all been an act);
2. in the speech defending the act, he promised a political agenda a lot of conservatives and reactionaries supported: ending the chaos of Weimar Germany, getting better deals in treaties and agreements than the weak previous governments had gotten, protecting Catholicism and Lutheranism, protecting normal people, preserving peace, building the German economy, and just generally being decisive (he also promised—in dog whistles—to purify Germany of immigrants and Jews);
3. he appeared to be a better choice than Soviet communism (since, in his framing, all liberalism is communism);
4. the Catholics and Lutherans decided their political agenda was more likely to be enacted with him, and he promised to support them, although he’d never been a particularly good Christian prior to his election;
5. the political situation seemed simultaneously chaotic and paralyzed, and many people said it was because people like them had made bad choices; Hitler said people like them were awesome and had never made bad choices, that it was just evil politicians, and that he wasn’t one, so they should trust him. (This point ignored that Hitler and his party had been crucial in making sure that democracy didn’t work.)

The whole “this person isn’t Hitler because I’d know Hitler” assumes that the Hitler of 1933 was a strikingly abnormal rhetor, and, certainly, Hitler’s rhetoric could be abnormal. When my students read Mein Kampf, they complain that he manages to be boring, enraging, and incoherent at the same time, and it’s an odd achievement for a text to do all three simultaneously—you’d think something enraging would at least manage not to be boring. Once we were using an online version that had skipped a page, and it took us a while to notice because the page jump made his argument only slightly more disconnected than usual. As mentioned earlier, the basic themes in Hitler’s rhetoric weren’t unique to him, and many Germans would have been consuming the same racist and militaristic rhetoric (even the lebensraum notion), but it was at least somewhat abnormal for a rhetor with major political ambitions to be so explicit and frothing at the mouth about them. But he was only that open until he was Chancellor.

So, the question of “Is this person just like Hitler?” generally appeals to a cartoon understanding of who “Hitler” was. It’s the wrong question. The question is whether they would have supported a leader who said: things have been bad in so many ways, and real Americans have been consistently screwed over and ignored in our political system. The major decision-making body has been paralyzed by political infighting by professional politicians who haven’t been paying attention to the kind of people (in terms of race and religion) who are the real heart of this nation. Our relations with other countries have been completely lopsided, and we’ve been giving way more than we’ve been getting. We aren’t a warlike people, and we don’t want war, but we insist on the right to defend our interests. Liberals and communists are basically the same, in that liberalism necessarily ends up in communism. Situations are never actually complex, but people who benefit from pretending they’re complicated will say they are (teachers, experts, governmental employees, lawyers). The correct policies we should be pursuing are absolutely obvious to a person of decisive judgment—being able to figure out the right course of action doesn’t require expert knowledge or listening to people who disagree. The ideal political leader has a history of being decisive. And that person cares about normal people like you who are the real heart of America, and it’s easy for someone like you to know whether the leader has good judgment and cares about you—you can just tell. There is one party that supports the obviously correct course of action, and we should try to ensure that party has control of every aspect of government, and that there will be no brakes on what that party decides to do.

If you would support someone making that argument, then Congratulations! You just endorsed Hitler’s argument in his March 23, 1933 speech!

 

[1] Again, not unheard of in our own time, and it’s done by people who get their panties in a bunch if anyone connects reactionary politics with other instances of reactionary politics—such as pointing out a possible connection between the SBC stance on gay marriage and its stance on segregation, or, perhaps, its formation and the connection to proslavery rhetoric. And, no, I’m not saying that everyone who now supports the SBC supports slavery. What I am saying is that the SBC has consistently gotten it wrong in regard to issues of race, and so maybe their exegetical method is flawed. If they keep getting an outcome that they later regret, maybe there is a process problem.

[2] They don’t live their lives that way, a point pursued elsewhere at greater length, but here I’ll just say that they will say something like “murder is wrong” and then have all sorts of exceptions and complicated cases. They manage to get dressed for work without being certain what the weather will be, and to pick a new show to watch without being certain they will like it (often, they just refuse to acknowledge the uncertainty).

How not to make a Hitler analogy

Americans love the Hitler analogy, the claim that their political leader is just like Hitler. And it’s almost always very badly done: their leader (let’s call him Chester) is just like Hitler because… and then you get trivial characteristics, ones that don’t distinguish either Hitler or Chester from most political leaders (they were both charismatic; they used Executive Orders), or ones that flatten the characteristics that made Hitler extraordinary (Hitler was conservative). The process all starts with deciding that Chester is evil, and Hitler is evil, and then looking for any ways that Chester is like Hitler. So, for instance, in the Obama-is-Hitler analogy, the argument was that Obama was charismatic, he had followers who loved him, he was clearly evil (to the person making the comparison; I’ll come back to that), and he maneuvered to get his way.

Bush was Hitler because he was charismatic, he had followers who loved him, he was clearly evil (to the people making the comparison), and he used his political powers to get his way. And, in fact, every effective political figure fits those criteria in that someone thought they were clearly evil: Lincoln, Washington, Jefferson, FDR, Reagan, Bush, and Trump, for instance.

“He was clearly evil.” In the case of Hitler, it means he killed six million Jews; in the case of Obama, it means he tried to reduce abortions in a way that some people didn’t like (he didn’t support simply outlawing them); in the case of Bush, it was that he invaded Iraq; for Lincoln, it was that he tried to end slavery; and so on. In other words, in the case of Hitler, every reasonable person agrees that the policies he adopted six or seven years into his time as Chancellor were evil. But not everyone who wants to reduce abortions to the medically necessary agrees that Obama’s policies were evil, and not everyone who wants peace in the Middle East agrees that Bush was evil.

So, what does it mean to decide a political leader is evil?

For instance, people who condemned Obama as evil often did so on grounds that would make Eisenhower and Nixon evil (support for the EPA, heavy funding for infrastructure, high corporate taxes, a social safety net that included some version of Medicare, secular public education), and on grounds that would make Eisenhower, Nixon, Reagan, and the first Bush evil (faith in social mobility, protection of public lands, promoting accurate science education, support for the arts, an independent judiciary, funding for infrastructure, good relations with other countries, the virtues of compromise). So, were the people condemning Obama as evil doing so on grounds that would cause them to condemn GOP figures as evil? No—their standards didn’t apply to figures they liked. Calling him evil was just a way of saying he wasn’t GOP.

Every political figure has some group of people who sincerely believe that leader is obviously evil. And every political figure who gets to be President has mastered the arts of being charismatic (not every one gets power from charismatic leadership, but that’s a different post), compromising, manipulating, engaging followers. So, is every political leader just like Hitler?

Unhappily, we’re in a situation in which people make the Hitler analogy to everyone else in their informational cave, and the people in that cave think it’s obviously a great analogy. Since we’re in a culture of demagoguery in which every disagreement is a question of good (our political party) or evil (their political party), any effective political figure of theirs is Hitler.

We’re in a culture in which a lot of media says, relentlessly, that all political choices are between a policy agenda that is obviously good and a policy agenda that is obviously evil, and, therefore, nothing other than the complete triumph of our political agenda is good. That’s demagoguery.

The claim that “he was clearly evil” is important because it raises the question of how we decide whether something is true or not. And that is the question in a democracy. The basic principle of a democracy is that there is a kind of common sense, that most people make decisions about politics in a reasonable manner, and that we all benefit because we get policies that are the result of the input of different points of view. Democracy is a politics of disagreement. But, if some people are supporting a profoundly anti-democratic leader, who will use the power of government to silence and oppress, then we need to be very worried. So the question of whether we are democratically electing someone who will, in fact, make our government an authoritarian one-party state is important. But, how do you know that your perception that this leader is just like Hitler is reasonable? What is your “truth test” for that claim?

 

  1. Truth tests, certainty, and knowledge as a binary

Talking about better and worse Hitler analogies requires a long digression into truth tests and certainty for two reasons. First, the tendency to perceive their effective political leaders as evil because their policies are completely evil is based on and reinforces the tendency to think of political questions as between obvious good and obvious evil, and that perception is reinforced by and reinforces what I’ll explain as the two-part simple truth test (does this fit with what I already believe, and do reliable authorities say this claim is true). Second, believing that all beliefs and claims can be divided into obvious binaries (you are certain or clueless, something is right or wrong, a claim is true or false, there is order or chaos) correlates strongly to authoritarianism, and one of the most important qualities of Hitler was that he was authoritarian (and that’s where a lot of these analogies fail—neither Obama nor Bush was an authoritarian).

And so, ultimately, as the ancient Greeks realized, any discussion about democracy quickly gets to the question of how common people make decisions as to whether various claims are true or false. Democracies fail or thrive on the single point of how people assess truth. If people believe that only their political faction has the truth and every other political faction is evil, then democracies collapse and we have an authoritarian leader. Hitlers arise when people abandon democratic deliberation.

That’s the most important point about Hitler: leaders like Hitler come about because we decide that diversity of opinion weakens our country and is unnecessary.

The notion that authoritarian governments arise from assumptions about how people argue might seem counterintuitive, since that seems like some kind of pedantic question only interesting to eggheads (not what you believe but how you believe beliefs work) and therefore off the point. But, actually, it is the point—democracies turn into authoritarian systems under some circumstances and thrive under others, and it all depends on what is seen as the most sensible way to assess whether a claim is true or not. The difference between democracy and authoritarianism is that practice of testing claims—truth tests.

For instance, some sources say that Chester is just like Hitler, and other sources say that Hubert is just like Hitler. How do you decide which claim is true?

One truth test is simple, and it has two parts: Does perceiving Chester as just like Hitler fit with what you already believe? Do sources you think are authorities tell you that Chester is just like Hitler? Let’s call this the simple two-part truth test, and the people who use it are simple truth-testers.

Sometimes it looks as though there is a third (but it’s really just the first reworded): can I find evidence to show that Chester is just like Hitler?

For many people, if they can confirm a claim through those three tests (does it fit what I believe, do authorities I trust say that, can I find confirming evidence), then they believe the claim is rational.

(Spoiler alert: it isn’t.)

That third question is really just the same as the first two. If you believe something—anything, in fact—then you can always find evidence to support it. If you are really interested in knowing whether your beliefs are valid, then you shouldn’t look to see whether there is evidence to support what you believe; you should look to see whether there is evidence that you’re wrong. If you believe that someone is mad at you, you can find a lot of evidence to support that belief—if they’re being nice, they’re being too nice; if they’re quiet, they’re thinking about how angry they are with you. You need to think about what evidence you would believe to persuade you they aren’t mad. (If there is none, then it isn’t a rational belief.) So, those three questions are two: does a claim (or political figure) confirm what I believe; do the authorities I trust confirm this claim (or political figure)?

Behind those two questions is a background issue of what decisions look like. Imagine that you’re getting your hair cut, and the stylist says you have to choose between shaving your head or not cutting your hair at all—how do you decide whether that person is giving you good advice?

And behind that is the question of whether it’s a binary decision—how many choices do you have? Is the stylist open to other options? Do you have other options? Once the stylist has persuaded you that you either do nothing to your hair or shave it, then all he has to do is explain what’s wrong with doing nothing. And you’re trapped by a logical fallacy, because leaving your hair alone might be a mistake, but that doesn’t actually mean that shaving your head is a good choice. People who can’t argue for their policy like the fallacy of the false division (the either/or fallacy) because it hides the fact that they can’t persuade you of the virtues of their policy.

The more that you believe every choice is between two absolutely different extremes, the more likely it is that you’ll be drawn to political leaders, parties, and media outlets that divide everything into absolutely good and absolutely bad.

It’s no coincidence that people who believe that the simple truth test is all you need also insist (sometimes in all caps) that anyone who says otherwise is a hippy dippy postmodernist. For many people, there is an absolute binary in everything, including how to look at the world—you can look and make a judgment easily and clearly or else you’re saying that any kind of knowledge at all is impossible. And what you see is true, obviously, so anyone who says that judgment is vexed, flawed, and complicated is a dithering weeny. They say that, for a person of clear judgment, the right course of action in all cases is obvious and clear. It’s always black (bad) or white (good, and what they see). Truth tests are simple, they say.

In fact, even the people who insist that the truth is always obvious and it’s all black or white go through their day in shades of grey. Imagine that you’re a simple truth tester. You’re sitting at your computer and you want an ‘e’ to appear on your screen, so you hit the ‘e’ key. And the ‘e’ doesn’t appear. Since you believe in certainty, and you did not get the certain answer you predicted, are you now a hippy-dippy relativist postmodernist (had I worlds enough and time I’d explain why that term is incredibly sloppy and just plain wrong) who is clueless? Are you paralyzed by indecision? Do you now believe that all keys can do whatever they want and there is no right or wrong when it comes to keys?

No, you decide you didn’t really hit the ‘e’ or your key is gummed up or autocorrect did something weird. When you hit the ‘e’ key, you can’t be absolutely and perfectly certain that the ‘e’ will appear, but that’s probably what will happen, and if it doesn’t you aren’t in some swamp of postmodern relativism and lack of judgment.

Your experience typing shows that the binary promoted by a lot of media between absolute certainty and hippy dippy relativism is a sloppy social construct. They want you to believe it, but your experience of typing, or making any other decision, shows it’s a false binary. You hit the ‘e’ key, and you’re pretty near certain that an ‘e’ will appear. But you also know it might not, and you won’t collapse into a cold sweat of clueless relativism if it doesn’t. You’ll clean your keyboard.

It’s the same situation with voting for someone, marrying someone, buying a new car, making dinner, painting a room. You can feel certain in the moment that you’re making the right decision, but any honest person has to admit that there are lots of times we felt totally and absolutely certain and turned out to have been mistaken. Feeling certain and being right aren’t the same thing.

That isn’t to say that the hippy-dippy relativists are right and all views are equally valid and there is no right or wrong—it’s to say that the binary between “the right answer is always obviously clear” and hippy-dippy relativism is wrong. For instance, in terms of the assertion that many people make that the distinction between right and wrong is absolutely obvious: is killing someone else right or wrong? Everyone answers that it depends. So, does that mean we’re all people with no moral compass? No, it means the moral compass is complicated, and takes thought, but it isn’t hopeless.

Our world is not divided into being absolutely certain and being lost in clueless hippy dippy relativism. But, and this is important, that is the black and white world described by a lot of media—if you don’t accept their truth, then you’re advocating clueless postmodern relativism. What those media say is that what you already believe is absolutely true, and, they say, if it turns out to be false, you never believed it, and they never said it. (The number of pundits who advocated the Iraq invasion and then claimed they were opposed to it all along is stunning. Trump’s claiming he never supported the invasion fits perfectly with what Philip Tetlock says about people who believe in their own expertise.)

And that you have been and always will be right is a lovely, comforting, pleasurable message to consume. It is the delicate whipped cream of citizenship—that you, and people like you, are always right, and never wrong, and you can just rely on your gut judgment. Of course, the same media that says it’s all clear has insisted that something is absolutely true that turned out not to be (Saddam Hussein has weapons of mass destruction, voting for Reagan will lead to the people’s revolution, Trump will jail Clinton, Brad Pitt is getting back together with Angelina Jolie, studies show that vaccines cause autism, the world will end in 1987). The paradox is that people continue to consume and believe media who have been wrong over and over, and yet are accepted as trusted authorities because they have sometimes been right, or, more often, because, even if wrong, what they say is comforting and assuring.

But, what happens when media say that Trump has a plan to end ISIS and then it turns out his plan is to tell the Pentagon to come up with a plan? What happens when the study that people cite to say autism is caused by vaccines turns out to be fake? Or, as Leon Festinger famously studied, what happens when a religion says the world will end, and it doesn’t? What happens when something you believe that fits with everything else you believe and is endorsed by authorities you believe turns out to be false? You could decide that maybe things aren’t simple choices between obviously true and obviously false, but that isn’t generally what people do. Instead, we recommit to the media because now we don’t want to look stupid.

Maybe it would be better if we all just decided that complicated issues are complicated, and that’s okay.

There are famous examples that show the simple truth test—you can just trust your perception—is wrong.


If you’re looking at paint swatches, and you want a darker color, you can look at two colors and decide which is darker. You might be wrong: famous optical illusions demonstrate our tendency to interpret color by context.

Those examples look like special cases, and they (sort of) are: if you know that you have a dark grey car, and there is a grey and dark grey car in the parking lot, you don’t stand in the parking lot paralyzed by not knowing which car is yours because you saw something on the internet that showed your perception of darkness might be wrong. That experiment shows you might be entirely wrong, but you will not go on in your life worrying about it.

But you have been wrong about colors. And we’ve all tried to get into the wrong car, but in those cases we get instant feedback that we were wrong. With politics it’s more complicated, since media that promoted what turns out to have been a disastrous decision can insist they never promoted it (when Y2K turned out not to be a thing, various radio stations that had been fear mongering about it just never mentioned it again), claim it was the right decision, or blame it on someone else. They can continue to insist that their “truth” is always the absolutely obvious decision and that there is a binary between being certain and being clueless. But, in fact, our operative truth test in the normal daily decisions we make is one that involves skepticism and probability. Sensible people don’t go through life with a yes/no binary. We operate on the basis of a yes/various degrees of maybe/no continuum.

What’s important about optical illusions is that they show that the notion central to a lot of argutainment—that our truth tests for politics should involve being absolutely certain that our group is right or else you’re in the muck of relativistic postmodernism—isn’t how we get through our days. And that’s important. Any medium, any pundit, any program, that says that decisions are always between us and them is lying to us. We know, from decisions about where to park, what stylist to use, what to make for dinner, how to get home, that it isn’t about us vs. them: it’s about making the best guesses we can. And we’re always wrong eventually, and that’s okay.

We tend to rely on what social psychologists call heuristics—mental shortcuts—because you can’t thoroughly and completely think through every decision. For instance, if you need a haircut, you can’t possibly thoroughly investigate every single option you have. You’re likely to have a method for reducing the uncertainty of the decision—you rely on reviews, you go where a friend goes, you just pick the closest place. If a stylist says you have to shave your head or do nothing, you’ll walk away.

You might tend to have the same thing for breakfast, or generally take the same route to work, campus, the gym. Your route will not be the best choice some percentage of the time because traffic, accidents, or some random event will make your normal route slower than others from time to time (if you live in Austin, it will be wrong a lot). Even though you know that you can’t be certain you’re taking the best route to your destination, you don’t stand in your apartment doorway paralyzed by indecision. You aren’t clueless about your choices—you have a lot of information about what tends to work, and what conditions (weather, a football game, time of day, local music festivals, roadwork) are likely to introduce variables in your understanding of what is the best route. You are neither certain nor clueless.

And there are dozens of other decisions we make every day that are in that realm of neither clueless nor certain: whether you’ll like this movie, if the next episode of a TV program/date/game version/book in a series/cd by an artist/meal at a restaurant will be as good as the last, whether your boss/teacher will like this paper/presentation as much as the previous, if you’ll enjoy this trip, if this shirt will work out, if this chainsaw will really be that much better, if this mechanic will do a good job on your car, if this landlord will not be a jerk, if this class/job will be a good one.

We all spend all of our time in a world in which we must manage uncertainty and ambiguity, but some people get anxious when presented with ambiguity and uncertainty, and so they talk (and think) as though there is an absolute binary between certain and clueless, and every single decision falls into one or the other.

And here things get complicated. The people who don’t like uncertainty and ambiguity (they are, as social psychologists say, “drawn to closure”) will insist that everything is this or that, black or white even though, in fact, they continually manage shades of grey. They get in the car or walk to the bus feeling certain that they have made the right choice, when their choice is just habit, or the best guess, or somewhere on that range of more or less ambiguous.

So, there is a confusion between certainty as a feeling (you feel certain that you are right) and certainty as a reasonable assessment of the evidence (all of the relevant evidence has been assessed and alternative explanations disproven)—as a statement about the process of decision-making. Most people use it in the former way, but think they’re using it in the latter, as though the feeling of certainty is correlated to the quality of evidence. In fact, how certain people feel is largely a consequence of their personality type (On Being Certain has a great explanation of that, but Tetlock’s Expert Political Judgment is also useful). There’s also good evidence that the people who know the most about a subject tend to express themselves with less certainty than people who are un- or misinformed (the “Dunning-Kruger effect”).

What all that means is that people who get anxious in the face of ambiguity and uncertainty resolve that anxiety by feeling certain, and using a rigid truth test. So, the world isn’t rigidly black or white, but their truth test is. For instance, it might have been ambiguous whether they actually took the best route to work, but they will insist that they did, and that they obviously did. They managed uncertainty and ambiguity by denying it exists. This sort of person will get actively angry if you try to show them the situation is complicated.

They manage the actual uncertainty of situations by, retroactively, saying that the right answer was absolutely clear.[1] That sort of person will say that “truth test” is just simply asking yourself if something is true or not. Let’s call that the simple truth test, and the people who use it simple truth testers.

The simple truth test has two parts: first, does this claim fit with what I already believe? and, second, do authorities I consider reliable promote this claim?

People who rely on this simple truth test say it works because, they believe, the true course of action is always absolutely clear, and, therefore, it should be obvious to them, and it should be obvious to people they consider good. (It shouldn’t be surprising that they deny having made mistakes in the past, simply refashioning their own history of decisions—try to find someone who will admit to having supported the Iraq invasion or to having panicked about Y2K.)

The simple truth test is comfortable. Each new claim is assessed in terms of whether it makes us feel good about things we already believe. Every time we reject or accept a claim on the basis of whether it confirms our previous beliefs, it confirms our sense of ourselves as people who easily and immediately perceive the truth. Thus, this truth test isn’t just about whether the new claim is true, but about whether we and people like us are certainly right.

The more certain we feel about a claim, the less likely we are to doublecheck whether we were right, and the more likely we are to find ways to make ourselves have been right. Once we get to work, or the gym, or campus, we don’t generally try to figure out whether we really did take the fastest route unless we have reason to believe we might have been mistaken and we’re the sort of person willing to consider that we might have been mistaken.

There’s a circle here, in other words: the sort of person who believes that there is a binary between being certain and being clueless, and who is certain about all of her beliefs, is less likely to do the kind of work that would cause her to reconsider her sense of self and her truth tests. Her sense of herself as always right appears to be confirmed because she can’t think of any time she has been wrong. Because she never looked for such a time.

Here I need to make an important clarification: I’m not claiming there is a binary between people who believe you’re either certain or clueless and people who believe that mistakes in perception happen frequently. It’s more of a continuum, but a pretty messy one. We’re all drawn to black or white thinking when we’re stressed, frightened, threatened, or trying to make decisions with inadequate information. Most people have some realms or sets of claims they think are certain (this world is not a dream, evolution is a fact, gravity happens). Some people need to feel certain about everything, and some people don’t need to feel certain much at all, and a lot of people feel certain about many things but not everything.

Someone who believes that her truth tests enable certainty on all or most things will be at one end of the continuum, and someone who managed to live in a constant state of uncertainty would be at the other. Let’s call the person at the “it’s easy to be certain about almost everything important” end an authoritarian (I’ll explain the connection better later).

Authoritarians have trouble with the concept of probabilities. For instance, if the weather report says there will be rain, that’s a yes/no. And it’s proven wrong if the weather report says yes and there is no rain. But if the weather report says there is a 90% chance of rain and it doesn’t rain, the report has not been proven wrong.

Authoritarians believe that saying there is a 90% chance is just a skeezy way to avoid making a decision—that the world really is divided into yes or no, and some people just don’t want to commit. And they consume media that says exactly that.

This is another really important point: many people spend their time consuming media that says that every decision is divided into two categories: the obviously right decision, and the obviously wrong one. And that media says that anyone who says that the right decision might be ambiguous, unclear, or a compromise is promoting relativism or postmodernism. So, as those media say, you’re either absolutely clear or you’re deep in the muck of clueless relativism. Authoritarians who consume that media are like the example above of the woman who believes that her certainty is always justified because she never checks to see whether she was wrong. They live in a world in which their “us” is always right, has always been right, and will always be right, and the people who disagree are wrong-headed ditherers who pretend that it’s complicated because they aren’t man enough to just take a damn stand.

(And, before I go on, I should say that, yes, authoritarianism isn’t limited to one political position—there are authoritarians all over the map. But, that isn’t to say that “both sides are just as bad” or authoritarianism is equally distributed. The distribution of authoritarianism is neither a binary nor a constant; it isn’t all on one side, but it isn’t evenly distributed.)

I want to emphasize that the authoritarian view—that you’re certain or clueless—is often connected to a claim that people are either authoritarians or relativists (or postmodernists or hippies) because there are two odd things about that insistence. First, a point I can’t pursue here, authoritarians rarely stick to principles across situations and end up fitting their own definition of relativist/postmodern. (Briefly, what I mean is that authoritarians put their group first, and say their group is always right, so they condemn behavior in them that they praise or justify in us. In other words, whether an act is good or bad is relative to whether it’s done by us or them—that’s moral relativism. So, oddly enough, you end up with moral relativism attacked by people who engage in it.) Second, even authoritarians actually make decisions in a world of uncertainty and ambiguity, and don’t use the same truth test for all situations. When their us turns out to be wrong, then they will claim the situation was ambiguous, there was bad information, everyone makes mistakes, and go on to insist that all decisions are unambiguous.

So, authoritarians say that all decisions are clear, except when they aren’t, and that we are always right, except when we aren’t. But those unclear situations and mistakes should never be taken as reasons to be more skeptical in the future.

 

  2. Back to Hitler

Okay, so how do most people decide whether their leader is like Hitler? (And notice that it is never about whether our leader is like Hitler.) If you believe in the simple two-part truth test, then you ask yourself whether their leader seems to you to be like Hitler, and whether authorities you trust say he is. And you’re done.

But what does it mean to be like Hitler? What was Hitler like?

There is the historical Hitler who was, I think, evil, but didn’t appear so to many people, and who had tremendous support from a lot of authoritarians, and there is the cartoon Hitler. Hitler was evil because he tried to exterminate entire peoples (and he started an unnecessary war, but that’s often left out). The cartoon version assumes that his ultimate goals were obvious to everyone from the beginning—that he came on the scene saying “Let’s try to conquer the entire world and exterminate icky people” and always stuck to that message, so that everyone who supported him knew they were supporting someone who would start a world war and engage in genocide.

But that isn’t how Hitler looked to people at the time. Hitler didn’t come across as evil, even to his opponents (except to the international socialists), until the Holocaust was well under way. Had he come across as evil, he would never have gotten into power. While Mein Kampf and his “beerhall” speeches were clearly eliminationist and warmongering, once he took power his recorded and broadcast speeches never mentioned extermination and were about peace. (According to Letters to Hitler, his supporters were unhappy when he started the war.) Hitler had a lot of support, of various kinds, and his actions between 1933 and 1939 actually won over a lot of people, especially conservatives and various kinds of nationalists, who had been skeptical of or even hostile to him before 1933. His supporters ranged from the fans (the true believers), through conservative nationalists who wanted to stop Bolshevism and reinstate what they saw as “traditional” values, and conservative Christians who objected to some of his policies but also liked a lot of them (such as his promotion of traditional roles for women, his opposition to abortion and birth control, his demonizing of homosexuality), to people of various political ideologies who liked that (they thought) he was making Germany respected again, had improved the economy, had ended the bickering and instability they associated with democratic deliberation, and was undoing a lot of the shame associated with the Versailles Treaty.

Until 1939, to his fans, Hitler came across as a truth-teller, willing to say politically incorrect things (that “everyone” knew were true), cut through all the bullshit, and be decisive. He would bring honor back to Germany and make it the military powerhouse it had been in recent memory; he would sideline the feckless and dithering liberals, crush the communists, and deal with the internal terrorism of the large number of immigrants in Germany who were stealing jobs, living off the state, and trying to destroy Germany from within; he would clean out the government of corrupt industrialists and financiers who were benefitting from the too-long deliberations and innumerable regulations. He would be a strong leader who would take action and not just argue and compromise like everyone else. He didn’t begin by imprisoning Jews; he began by making Germany a one-party state, and that involved jailing his political opponents.

Even to many people willing to work with him, Hitler came across as crude, as someone pandering to popular racism and xenophobia, a rabble-rouser who made absurd claims, who didn’t always make sense, and whose understanding of the complexities of politics appeared minimal. But conservatives thought he would enable them to put together a coalition that would dominate the Reichstag (the German Congress, essentially) and they could thereby get through their policy agenda. They thought they could handle him. While they granted that he had said some pretty racist and extreme things (especially his hostility to immigrants and non-Christians, although his own record on Christian behavior wasn’t exactly great), they thought that was rabble-rousing he didn’t mean, a rhetoric he could continue to use to mobilize his base for their purposes, or that he could be their pitbull whom they could keep on a short chain. He instantly imposed a politically conservative social agenda that made a lot of conservative Christians very happy—he was relentless in his support for the notions that men earn money and women work in the home, that homosexuality and abortion are evil [2], and that sexual immorality weakens the state, and his rhetoric was always framed in “Christian terms” (as Kenneth Burke famously argued, his rhetoric was a bastardization of Christian rhetoric, but it still relied on Christian tropes).

Conservative Christians (Christians in general, to be blunt) had a complicated reaction to him. Most Christian churches of the era were anti-Semitic, and that took various forms. There were the extreme forms—the passion plays that showed Jews as Christ-killers, who killed Christians for their blood at Passover, even religious festivals about how Jews stabbed consecrated hosts (some of which only ended in the 1960s).

There were also the “I’m not racist but” versions of Christian anti-Semitism promoted by Catholic and Protestant organizations (all of this is elegantly described in Antisemitism, Christian Ambivalence, and the Holocaust). Mainstream Catholic and Lutheran thought promoted the notion that Jews were, at best, failed Christians, and that the only reason not to exterminate them was so that they could be converted. There was, in that world, no explicit repudiation of the sometimes pornographic fantasies of greedy Jews involved in worldwide conspiracies, stabbing the host, drinking the blood of Christian boys at Passover, and plotting the downfall of Germany. And there was certainly no sense that Christians should tolerate Jews in the sense of treating them as we would want to be treated; it simply meant that they shouldn’t be killed. As Ian Kershaw has shown, a lot of German Christians didn’t bother themselves about oppression (even killing) of Jews, as long as it happened out of their ken; they weren’t in favor of killing Jews, but, as long as they could ignore that it was happening, they weren’t going to do much to protest (Hitler, The Germans, and the Final Solution).

Many of his skeptics (even international ones) were won over by his rhetoric. His broadcast speeches emphasized his desire for peace and prosperity; they liked that he talked tough about Germany’s relations to other countries (but didn’t think he’d lead them into war), they loved that he spent so much of his own money doing good things for the country (in fact, he got far more money out of Germany than he put into it, and he didn’t pay taxes—for more on this, see Hitler at Home), and they loved that he had the common touch, and didn’t seem to be some inaccessible snob or aristocrat, but a person who really understood them (Letters to Hitler is fascinating for showing his support). They believed that he would take a strong stance, be decisive, look out for regular people, clear the government of corrupt relationships with financiers, silence the kind of people who were trying to drag the nation down, and cleanse the nation of that religious/racial group that was essentially ideologically committed to destroying Germany.

There were a lot of people who thought Hitler could be controlled and used by conservative forces (von Papen) or was a joke. In middle school, I had a teacher who had been in the Berlin intelligentsia before and during the war, and when asked why people like her didn’t do more about Hitler, she said, “We thought he was a fool.” Many of his opponents thought he would never get elected, never be given a position of power.

But still, some students say, you can see in his early rhetoric that there was a logic of extermination. And, yes, I think that’s true, but, and this is important, what makes you think you would see it? Smart people at the time didn’t see it, especially since, once he got a certain level of attention, he only engaged in dog whistle racism. Look, for instance, at Triumph of the Will—the brilliant film of the 1934 Nazi rally in Nuremberg—in which anti-Semitism appears absent. The award-winning movie convinced many that Hitler wasn’t really as anti-Semitic as Mein Kampf might have suggested. But, by 1934, true believers had learned their whistles—everything about bathing, cleansing, purity, and health was a long blow on the dog whistle of “Jews are a disease on the body politic.” Hitler’s first speech on the dissolution of the Reichstag (March 1933) never used the word Jew, and looked reasonable (he couldn’t control himself, however, and went back to his non-dog whistle demagoguery in what amounted to the question and answer period—Kershaw’s Hubris describes the whole event).

We focus on Hitler’s policy of extermination, but we don’t always focus enough on his foreign policy, especially between 1933 and 1939. Just as we think of Hitler as a raging antisemite (because of his actions), so we think of him as a warmonger, and he was both at heart and eventually, but he managed not to look that way for years. That’s really, really important to remember. He took power in 1933, and didn’t show his warmongering card till 1939. He didn’t show his exterminationist card till even later.

Hitler’s foreign policy was initially tremendously popular because he insisted that Germany was being ill-treated by other nations, was carrying a disproportionate burden, and was entitled to things it was being denied. Hitler said that Germany needed to be strong, more nationalist, more dominating, more manly in its relations with other nations. Germany didn’t want war, but it would, he said, insist upon respect.

Prior to being handed power, Hitler talked like an irresponsible warmonger and raging antisemite (especially in Mein Kampf), but his speeches right up until the invasion of Poland were about peace, stability, and domestic issues—helping the common working man. Even in 1933-4, the Nazi Party could release a pamphlet of his speeches with the title Germany Desires Work and Peace.

What that means is that from 1933 to 1939 Hitler managed a neat rhetorical trick, and he did it by dog whistles: he persuaded his extremist supporters that he was still the warmongering raging antisemite they had loved in the beerhalls and for whom Streicher was a reliable spokesman, and he persuaded the people frightened by his extremism that he wasn’t that guy, he would enable them to get through their policy agenda. (His March 1933 speech is a perfect example of this nasty strategy, and some day I intend to write a long close analysis of it.)

And even many of the conservatives who were initially deeply opposed to him came around because he really did seem to be effective at getting real results. He got those results by mortgaging the German economy, and setting up both a foreign policy and economic policy that couldn’t possibly be maintained without massive conquest; it had short-term benefits, but was not sustainable.

Hitler benefited from the culture of demagoguery of Weimar Germany. After Germany lost WWI, the monarchy was ended, and a democracy was imposed. Imposing democracy is always vexed, and it doesn’t always work, because democracy depends on certain cultural values (a different post). One of those values is seeing pluralism—that is, diversity of perspective, experience, and identity—as a good thing. If you value pluralism, then you’ll tend to value compromise. If you believe that a strong community has people with different legitimate interests, points of view, and beliefs, then you will see compromise as a success. If, however, you’re an authoritarian, and you believe that you and only you have the obvious truth and everyone else is either a knave or a fool, then you will see refusing to compromise as a virtue.

And then democracy stalls. It doesn’t stall because it’s a flawed system; it stalls when people reject the basic premises of democracy, when, despite how they make decisions about how to get to work in the morning, or whether to take an umbrella, they insist that all decisions are binaries between what is obviously right (us) and what is obviously wrong (them).

And, in the era after WWI, Germany was a country with a democratic constitution but a rabidly factionalized set of informational caves. People could (and did) spend all their time getting information from media that said that all political questions are questions of good (us) and evil (them). Those media promoted conspiracy theories—the Protocols of the Elders of Zion, for instance—insisted on the factuality of non-events, framed all issues as apocalyptic, and demonized compromise and deliberating. They said it’s a binary. The International Socialists said the same thing, that anything other than a workers’ revolution now was fascism, that the collapse of democracy was great because it would enable the revolution. Monarchists wanted the collapse of the democracy because they hoped to get a monarchy back, and a non-trivial number of industrialists wanted democracy to collapse because they were afraid people would vote for a social safety net that would raise their taxes.

It was a culture of demagoguery.

But, in the moment, large numbers of people didn’t see it that way because, if you were in a factional cave, and you used the two-part truth test, everything you heard in your cave would seem to be true. Everything you heard about Hitler would fit with what you already believed, and it was being repeated by people you trusted.

Maybe what you heard confirmed that he would save Germany, that he was a no-bullshit decisive leader who really cared about people like you and was going to get shit done, or maybe what you heard was that he was a tool of the capitalists and liberals and that you should refuse to compromise with them to keep him out of power. Whether what you heard was that Hitler was awesome or that he was completely wrong, what you heard was that he was obviously one or the other, and that anyone who disagreed with you was evil. What you heard was that disagreement itself was proof that evil was present. And what you heard was that democracy was a failure.

And that helped Hitler, even the attacks on him. As long as everyone agreed that the truth is obvious, that disagreement is a sign of weakness, that compromise is evil, then an authoritarian like Hitler would come along and win.

There were a lot of people who more or less supported the aims he said he had—getting Germany to have a more prosperous economy, fighting Bolshevism, supporting the German church, avoiding war, renegotiating the Versailles Treaty, purifying Germany of anti-German elements, making German politics more efficient and stable—but who thought Hitler was a loose cannon and a demagogue. Many of those were conservatives and centrists.

And, once Hitler was in power, they watched him carefully. And, really, all his public speeches, especially ones that might get international coverage, weren’t that bad. They weren’t as bad as his earlier rhetoric. There wasn’t as much explicit anti-Semitism, for instance, and, unlike in Mein Kampf, he didn’t advocate aggressive war. He said, over and over, he wanted peace. He immediately took over the press, but, still and all, every reader of his propaganda could believe that Hitler was a tremendously effective leader, and, really, by any standard he was: he effected change.

There wasn’t, however, much deliberation as to whether the changes he effected were good. He took a more aggressive stance toward other countries (a welcome change from the loser stance adopted from the end of WWI, which, technically, Germany did lose), he openly violated the deliberately shaming aspects of the Versailles Treaty, he appeared to reject the new terms of the capitalism of the era (he met with major industrial leaders and claimed to have reached agreements that would help workers), he reduced disagreement, he imprisoned people who seemed to many people to be dangerous, he enacted laws that promoted the cultural “us” and disenfranchised “them.” And he said all the right things. At the end of his first year, Germany published a pamphlet of his speeches, with the title “The New Germany Desires Work and Peace.” So, by the simple two-part truth test (do the claims support what you already believe? do authorities you trust confirm these claims?) Hitler’s rhetoric would look good to a normal person in the 30s. Granted, his rhetoric was always authoritarian—disagreement is bad, pluralism is bad, the right course of action is always obvious to a person of good judgment, you should just trust Hitler—but it would have looked pretty good through the 30s. A person using that third test—can I find evidence to support these claims?—would have felt that Hitler was pretty good.

 

III. So, would you recognize Hitler if you liked what he was saying?

What I’m trying to say is that asking the question “Is their political leader just like Hitler?” goes just about as wrong as it can get as long as you’re relying on simple truth tests.

If you get all your information from sources you trust, and you trust them because what they say fits in with your other beliefs, then you’re living in a world of propaganda.

If you think that you could tell if you were following a Hitler because you’d know he was evil, and you are in an informational cave that says all the issues are simple, good and evil are binaries and easy to tell one from another, there is either certainty or dithering, disagreement and deliberation are what weak people do, compromise is weakening the good, and the truth in any situation is obvious, then, congratulations, you’d support Hitler! Would you support the guy who turned out to start a disastrous war, bankrupt his nation, commit genocide? Maybe—it would just be random chance. Maybe you would have supported Stalin instead. But you would definitely have supported one or the other.

Democracy isn’t about what you believe; it’s about how you believe. Democracy thrives when people believe that they might be wrong, that the world is complicated, that the best policies are compromises, that disagreement can be passionate, nasty, vehement, and compassionate–that the best deliberation comes when people learn to perspective shift. Democracy requires that we lose gracefully, and it requires, above all else, that we don’t assess policies purely on whether they benefit people like us, but that we think about fairness across groups. It requires that we do unto others as we would have them do unto us, that we pass no policy that we would consider unfair if we were in all the possible subject positions of the policy. Democracy requires imagining that we are wrong.

 

 

 

[1] That sort of person often subscribes to the “just world model” or “just world hypothesis,” which is the assumption that we are all rewarded in this world for our efforts. If something bad happens to you, you deserved it. People who claim that this is Scriptural will cherry-pick quotes from Proverbs, ignoring what Jesus said about rewards in this world, as well as various other important parts of Scripture (Ecclesiastes, Job, Paul).

 

[2] There is a meme circulating that Hitler was pro-abortion. His public stance was opposition to abortion at least through the thirties. Once the genocides were in full swing, Nazism supported abortion for “lesser races.”

Terrorist Peanuts and Immigration

When I teach about the Holocaust, one of the first questions students ask is: why didn’t the Jews leave? The answer is complicated, but one part isn’t: where would they go? Countries like the US had such restrictive immigration quotas for the parts of Europe from which the Jews were likely to come that we infamously turned back ships. And, so, students ask, why did we do that?

We did it because of that era’s version of the peanut argument.

The peanut argument (more recently presented with a candy brand name attached to it, but among neo-Nazis the analogy used is a bowl of peanuts) has been shared by many, including members of our administration, as a mic-drop defense of a travel ban on people from regions and of religions considered dangerous. As the analogy goes: would you eat from a bowl of peanuts if you knew that one was poisoned?

People who make that argument insist that they are not being racist, because their objection is, they say, not based in an irrational stereotype about this group. They say it is a rational reaction to what members of this group have really done. And, they say, for the same reason, that they are not being hypocritical: as descendants of immigrants, they are open to safe immigrant groups. These immigrants, unlike their forebears, have dangerous elements.

What they don’t know is that every ethnicity and religion that has come to America has had members that struck large numbers of existing citizens as dangerous—the peanut argument has always been around. And it’s exactly the argument that was used for sending Jews back to death. The tragedies of the US immigration policy during Nazi extermination were the consequence of the 1924 Immigration Act, a bill that set race-based immigration quotas grounded in arguments that this set of immigrants (at that point, Italians and eastern and central Europeans) was too fundamentally and dangerously antagonistic to American traditions and institutions to admit. Architects of that act (and defenders of maintaining the quotas, in the face of people escaping genocide) insisted that they weren’t opposed to immigration, just this set of immigrants.

At least since Letters from an American Farmer (first published in 1782), Americans have taken pride in being a nation of immigrants. And, since around the same time, large numbers of Americans who took pride in being descended from immigrants have stoked fear about this set of immigrants.

Arguments about whether Catholics were a threat to democracy raged throughout the nineteenth century, for instance. Samuel Morse (of the Morse code) wrote a tremendously popular book arguing that German and Irish Catholics were conspiring to overthrow American democracy, which appealed to popular notions about Catholics’ religion being essentially incompatible with democracy. Hostility towards the Japanese and Chinese (grounded in stereotypes that their political and religious beliefs necessarily made them dangerous citizens) resulted in laws prohibiting their naturalization, property ownership, and repatriation, and, ultimately, their immigration (and, in the case of the Japanese, it led to race-based imprisonment). After the revolutions of 1848, and especially with the rise of violent political movements in the late nineteenth century (anarchism, Sinn Fein, various anti-colonial and independence movements), large numbers of politicians began to focus on the possibility that allowing this group would mean that we were allowing violent terrorists bent on overthrowing our government.

And that’s exactly what it did mean. Every one of those groups did have individuals who advocated violent change.

A large number of the defendants in the Haymarket Trial (concerning a fatal bomb-throwing incident at a rally of anarchists) were immigrants or children of immigrants; by the early 20th century, people arguing that this group had dangerous individuals could (and did) cite examples like Emma Goldman (a Jewish anarchist imprisoned for inciting to riot), Nicola Sacco and Bartolomeo Vanzetti (Italian anarchists executed for a murder committed during a robbery), Jacob Abrams and Charles Schenck (Jews convicted of sedition), and Leon Czolgosz (the son of Polish immigrants, who shot McKinley). Even an expert like Harry Laughlin, of the Eugenics Record Office, would testify that the more recent set of immigrants were genetically dangerous (they weren’t—his math was bad).

History has shown that the fearmongers were wrong. While those groups did all have advocates of violence, and individuals who advocated or committed terrorism, the peanut analogy was fallacious, unjust, and unwise. Those groups also contributed to America, and they were not inherently or essentially un-American.

Looking back, we should have let the people on those ships disembark. Looking forward, we should do the same.

[image: By Internet Archive Book Images – https://www.flickr.com/photos/internetarchivebookimages/14782377875/Source book page: https://archive.org/stream/christianheralds09unse/christianheralds09unse#page/n328/mode/1up, No restrictions, https://commons.wikimedia.org/w/index.php?curid=42730228]

Demagoguery and Democracy

John Muir and environmental demagoguery

One of the most controversial claims I make about demagoguery is that it isn’t necessarily harmful. When I make that argument, it’s common for someone to disagree with me by pointing out that some specific instance of demagoguery is harmful. But that doesn’t refute my argument, because I’m not arguing for a binary of demagoguery being always or never harmful. I’m saying that not every instance of demagoguery is necessarily harmful. Whether demagoguery is harmful depends, I think, on where it lies on multiple axes: how demagogic the text is; how powerful the media promoting the demagoguery are; how widespread that kind of demagoguery is.

(Yeah, yeah, I know, that means a 3D map, but I honestly think you need all three axes.)

And the best way to talk about the harmless demagoguery is to talk more about one of the first examples of a failed deliberative process that haunted me. One spring, when I was a child, my family went to Yosemite Valley in Yosemite National Park. My family mostly tried (and failed) to teach one another bridge, and I wandered around the emerald valley. Having grown up in semi-arid southern California, the forested walks seemed to me magical, and I was enchanted. One evening, my mother took me to a campfire, hosted by a ranger, who told the story of John Muir, a California environmentalist crucial in the preservation of Yosemite National Park. The last part of the ranger’s talk was about Muir’s final political endeavor, his unsuccessful attempt to prevent the damming and flooding of the Hetch Hetchy Valley, a valley the ranger said was as beautiful as the one by which I had been entranced. The ranger presented the story as a dramatic tragedy of Good (John Muir) versus Evil (the people who wanted to dam and flood the valley), with Evil winning and Muir dying of a broken heart. I was deeply moved, and fascinated. And years later, I would come back to the story when trying to think about whether and how people can argue together on issues with profound disagreement.

The ranger had told the story of Good versus Evil, but that isn’t quite right, in several ways. For one thing, it wasn’t a debate with only two sides (something I have since discovered to be true of most political issues). In this case, it is more accurate to say that there were three sides: the corrupt water company currently supplying San Francisco that wanted to prevent San Francisco getting any publicly-owned water supply; the progressive preservationists like John Muir, who wanted San Francisco to get an outside publicly-owned water supply, but not the Hetch Hetchy; and the progressive conservationists like Gifford Pinchot or Marsden Manson, who wanted an outside publicly-owned water supply that included the Hetch Hetchy.

And a little background on each of the major figures in this issue. Gifford Pinchot was head of the Forest Service, with close political ties to Theodore Roosevelt. Born in 1865, he was a strong advocate of conservation—that is, keeping large parts of land in public ownership, sustainable foresting practices, and what is called “multiple use.” The principle of conservation (as opposed to preservation) is that public lands should be available to as many different uses as possible, such as foresting, hunting, camping, and fishing. The consensus among scholars is that Pinchot’s support for the Hetch Hetchy dam was crucial to its success.

Marsden Manson was far less famous than Pinchot. Born in 1850, he was an engineer (trained at Berkeley) and a member of the Sierra Club who had camped in Yosemite; from 1897 till 1912 he worked for the City of San Francisco, first serving on the San Francisco Drainage Committee, then in the Public Works Department, and finally as City Engineer. It was in that capacity that he wrote the pamphlet I’ll talk about in a bit. He was an avid conservationist.

John Muir is probably the most famous of the people heavily involved in the controversy, and still a hero among environmentalists. Born in Scotland in 1838, he emigrated with his family to the United States, to Wisconsin, when he was around ten. He arrived in California in 1868, and promptly went to Yosemite Valley (which was not yet a national park). He stayed there for several years, writing about the Sierras in what would become articles in popular magazines. His elegant descriptions of the beauties of the Sierra Nevada mountains were influential in persuading people to preserve the area, creating Yosemite National Park. He was the first President of the Sierra Club (formed in the early 1890s), which is still a powerful force in environmentalism. Muir was a preservationist, believing that some public lands should be preserved in as close to a wilderness state as possible.

Perhaps the most important character in the controversy is the Hetch Hetchy Valley. Part of the Yosemite National Park, it was less accessible than Yosemite Valley, and hence far less famous. Like many other valleys in the Sierra Nevada mountains, it was formed by glaciers. Two of its waterfalls are among the tallest waterfalls in North America.

The story the ranger told was one of right versus wrong, good versus evil, and, even though I disagree with the stance Pinchot and Manson took, and believe that the Hetch Hetchy Valley should not have been dammed (and I believe they used some pretty sleazy rhetorical and political tactics to make it happen), I don’t think they were bad people. I don’t think they were selfish or greedy, or even that they didn’t appreciate nature. I think they believed that what they were doing was right, and they had some good arguments and good reasons, and they felt justified in some troubling rhetorical means because they believed their ends were good. I don’t think they were Evil.

After all, San Francisco had long been victimized by a corrupt water company, the Spring Valley Company, with a demonstrated record of exploiting users (particularly during the aftermath of the 1906 earthquake). San Francisco had a legitimate need for a new water supply, and the argument that such public goods should not be subject to the profit motive is a sensible argument. The proponents of the dam argued that turning the valley into a reservoir would increase the public’s access to it, and the ability of the public to benefit. The dam, it was promised, would provide electric power that would be a public utility (that is, not privately owned), thereby benefiting the public directly. Thus, both the preservationists and conservationists were concerned about public good, but they proposed different ways of benefitting the public.

Although John Muir was President and one of the founders of the Sierra Club, not everyone in the organization was certain the dam was a mistake, and so the issue was put to a vote—the Sierra Club at that point had both conservationists and preservationists. Muir wrote the case against, a pamphlet called “The Hetch Hetchy Valley,” which, along with Manson’s argument, “Statements of San Francisco’s Side of the Hetch Hetchy Reservoir Matter,” was distributed to members of the Sierra Club, and they were asked to vote.

For Muir’s pamphlet, he reused much of an 1873 article about Hetch Hetchy, originally written to persuade people to visit the Sierras. He kept much (but not all) of his highly poetical description of the Hetch Hetchy Valley, especially its two falls. His argument throughout the pamphlet is that the valley is beautiful, unique and sacred; it isn’t until the end of the pamphlet that he added a section specifically written for the dam controversy, and in that part he resorted to demagoguery, painting his opponents as motivated by greed and an active desire to destroy beauty, in the same category as the Merchants in the Temple of Jerusalem and Satan in the Garden of Eden: “despoiling gainseekers, — mischief-makers of every degree from Satan to supervisors, lumbermen, cattlemen, farmers, etc., eagerly trying to make everything dollarable […] Thus long ago a lot of enterprising merchants made part of the Jerusalem temple into a place of business instead of a place of prayer, changing money, buying and selling cattle and sheep and doves. And earlier still, the Lord’s garden in Eden, and the first forest reservation, including only one tree, was spoiled.” Muir presented the conflict as “part of the universal battle between right and wrong,” and characterized his opponents’ arguments as “curiously like those of the devil devised for the destruction of the first garden — so much of the very best Eden fruit going to waste; so much of the best Tuolumne water.” Muir called his opponents “Temple destroyers, devotees of ravaging commercialism,” saying, they “seem to have a perfect contempt for Nature, and, instead of lifting their eyes to the mountains, lift them to dams and town skyscrapers.” And he ended the pamphlet with the rousing peroration:

Dam Hetch-Hetchy! As well dam for water-tanks the people’s cathedrals and churches, for no holier temple has ever been consecrated by the heart of man. (John Muir Sierra Club Bulletin, Vol. VI, No. 4, January, 1908)

Muir’s argument is demagoguery—he takes a complicated situation (with at least three different positions) and divides it into a binary of good versus evil people. The bad people don’t have arguments; they have bad motives.

But this, too, is a controversial claim on my part, and it makes some people really angry with me when I “criticize” Muir. The common response is that I shouldn’t criticize him because he was a good man fighting for a good cause. In other words, the world is divided into good and bad people, and we shouldn’t criticize good people on our side. And I reject every part of that argument. I think we should criticize people on our side, especially if we agree with their ends (and especially if we’re looking critically at an argument in the past), because that’s how we learn to make better arguments. And I’m not even criticizing Muir in the sense those people mean—they mean I’m saying negative things about him, and that I believe he should have done things differently. The assumption is that demagoguery is bad, so by saying he engaged in demagoguery I’m saying he was a bad person.

Like Muir’s argument, that response presumes a binary (or even a continuum) between good and bad people. Whether there really is such a binary I don’t know, but I’m certain that it isn’t relevant here. The debate wasn’t split into good and bad people, and we don’t have to make our heroes untouchable.

And, besides, I’m not criticizing Muir in the sense of saying he did the wrong thing. I’m not sure he did. His demagoguery did no particular harm. While his text (especially the last part) is demagoguery, and he was a powerful rhetor at the time, the kind of demagoguery in which he engaged (against conservationists) wasn’t very widespread, so he wasn’t contributing to a broad cultural demonizing of some group. And I’m not even sure that his demagoguery did any harm (or any good) to the effectiveness of his argument.

Muir was trying to get the majority of people in the Sierra Club—perhaps even all of them—to condemn the Hetch Hetchy scheme on preservationist grounds, so he already had the votes of preservationists like himself. What he had to do rhetorically is to move conservationists (or, at least, people drawn to that position) over to the preservationist side, at least in regard to the Hetch Hetchy Valley.

A useful step in an argument is identifying what, exactly, the issue is (or the issues are): why are we disagreeing? Classical rhetorical theory calls this the “stasis”—the “hinge” of an argument—and it points to the paradox that a productive disagreement requires agreement on several points, including on the geography of the argument: what is at the center, how broad an area can/should the argument cover, what areas are out of bounds? The stasis is the main issue in the argument, and arguments often go wrong because people disagree about what it is. In the case of the Hetch Hetchy, an ideal argument would be about whether damming and flooding that valley was the best long-term option for everyone who uses the valley—such a debate would require that people talk honestly and accurately about the actual costs, the various options, and, as usefully as possible, about the benefits (of all sorts) to be had from preserving the valley for camping (a big issue in California, where camping is very popular).

It’s conventional in rhetoric to say that you have to argue from your opposition’s premises in order to persuade your opposition, and that would have required Muir to argue from the premises that informed conservationism.

Muir’s rhetorical options included:

  1. condemning conservationism in the abstract, and trying to persuade his conservationist audience to abandon an important value;
  2. arguing that conservationism is not a useful value in this particular case, and that this is a time when preservationism is a better route;
  3. arguing that damming and flooding the valley does not really enact conservationist values (e.g., it’s actually expensive).

But, to pursue any of those strategies effectively, he’d have to make the case on the conservationist premise that it’s appropriate to think about natural resources in terms of costs and benefits. And Muir’s stance about nature—his whole career—was grounded in the perception that such a way of looking at nature is unethical.

Muir paraphrases (in quotes) the conservationist mantra: “Utilization of beneficent natural resources, that man and beast may be fed and the dear Nation grow great.” While I’ve never found any conservationist text with that precise wording, it’s a fair representation of the basic principle of conservation; i.e., “greatest good for the greatest number.” And, certainly, conservationists did (and do) believe that there is no point in preserving any wilderness areas—all forests should be harvested, all lakes should be used, all areas should be open to hunting. But they believed this not out of a desire for financial gain so much as from a different (and, I would say, wrong-headed) perception of how to define “the public.”

The conservationist argument in this case was pretty much in bad faith, in that its proponents claimed that they would improve the beauty of the valley by making it a lake. Muir argued they would destroy it. I agree with Muir, as it happens, and so my argument is not that Muir was factually wrong; the valley was destroyed by the damming. I also think some of the dam proponents—specifically Manson—knew that it would be destroyed, and that Manson was lying when he described a road, increased camping, and other features that, as an engineer, he must have known were impossible. But many of the people drawn to the conservationist plan didn’t know that Manson was describing technologically impossible conditions, and they believed the proponents’ argument that the resulting reservoir would not only benefit San Franciscans (by providing safe, cheap water and electric power) but would have no impact on camping; it would, the conservationists claimed, increase the accessibility of the area without interfering with the beauty of the valley at all. Again, that isn’t true, but it’s what people believed. And part of Aristotle’s point about rhetoric, and its reliance on the enthymeme, is that rhetoric begins with what people believe.

Manson’s response was fairly straightforward, and grounded, he insisted repeatedly, on facts. He argued:

  • San Francisco owned the valley floor.
  • Construction would not begin on the Hetch Hetchy dam until and unless San Francisco first developed Lake Eleanor (a water source not disputed by the preservationists) and then found that water source inadequate.
  • A photo he presented showed what the lake would look like when dammed and flooded—very little of the valley flooded, with no obstruction of the falls that Muir praised so heavily, and a road around the edge enabling visitors to see more of the valley—so, he said, the valley would be more beautiful, reflecting the magnificent granite walls.
  • Keeping the reservoir water pure would not inhibit camping in any way.
  • The Hetch Hetchy plan was the least expensive option, and it would provide energy, thereby breaking the current energy monopoly.

Muir’s arguments, he says, “are not in reality based upon true and correct facts” (435).

Marsden Manson was City Engineer for San Francisco, and had done thorough reports on the issue. And so he had to know that almost all of what he was saying was “not in reality based upon true and correct facts.” San Francisco had bought the land, but, since it was within a national park, the seller had no right to sell it. Construction would begin immediately on the dam, flooding the entire valley, making the entire valley inaccessible, including the famous falls. It was not possible to build the roads that Manson drew on the photo and, being an engineer, he must have known that. The reservoir inhibited camping, and, most important, the Hetch Hetchy plan was the most expensive option available to San Francisco. Manson had muddled the numbers to make it appear less expensive.

In other words, either Manson lied, or he was muddled, uninformed, bad at arithmetic, and not a very good engineer.

Manson’s motives in all this are complicated, and ultimately irrelevant. He may have expected to benefit personally from the approval of the dam project, as he may have thought he would build it. But it would have been a benefit of glory, not money; I’ve never read anything to suggest that he was motivated by anything other than a sense that dominating nature is glorious, and that public projects providing water and power are better than preserved valleys. (He is reputed to have suggested damming and flooding Yosemite Valley.)

In other words, what presented itself as the pragmatic option was just as ideologically driven as what was rejected as the emotional one (I think the same thing happens now with arguments about the death penalty, welfare “reform,” the war on drugs, foreign policy, the deficit—there is a side that manages to be taken as more practical, but it might actually be the most ideologically driven).

Muir’s rhetorical options were limited by his opponent, an engineer, making claims about engineering issues that neither Muir nor his supporters had the expertise to refute. It took years for someone to look at the San Francisco reports and determine that the numbers were bad; preservationists didn’t know (and, presumably, many supporters of the dam didn’t know) that the numbers were misleading, and it was the most expensive option.

But would Muir have argued on such grounds anyway? To argue on the grounds of cost would have confirmed the major premise that public projects should be determined by cost—to say that the Hetch Hetchy dam should not be built because it was the most expensive option would seem to confirm the perception that you can make natural cathedrals “dollarable,” in Muir’s word. In other words, Muir rejected the very terms by which the conservationist argument was made—he rejected the premises. To argue on your opponents’ premises (except in rare circumstances) seems to confirm them, and so, in order to win the Hetch Hetchy argument, he would have had to argue against what he had spent a lifetime arguing for: that we should not look at nature in terms of money. Wilderness areas are, he insisted, sacred. And so he railed against his opposition.

As I mentioned above, I’m often attacked by people who think I’m attacking Muir. And I think that misunderstanding arises because of a particular perception of what the discipline of rhetoric is for: rhetorical analysis is often seen as implicitly normative; we do an analysis to say what a person should do or should have done. So, to say that Muir’s rhetorical strategies didn’t work is to say his rhetoric was bad, and it should have been different. Coupled with the notion that good people promote good things, if I say that Muir’s rhetoric was “demagoguery,” then I am saying he cannot have been a good person. There is, here, a theory of identity: that people are either good or bad; that good people say good things, and that bad people say bad things; that demagoguery is something only bad people do. That whole model of discourse and identity is wrong in too many ways to count, and I am not endorsing it.

I think Muir was a good man—he is a personal hero of mine—but that doesn’t mean he was perfect, and it certainly doesn’t mean we can’t learn from him. Muir did well within the Sierra Club (the vote went about 80% for Muir’s side and 20% in favor of the dam), but he ultimately lost the argument. And I think what we learn from his failure to persuade all conservationists to vote against the Hetch Hetchy project is not about Muir’s personal qualities or failings, but about rhetorical constraints and models of persuasion.

I’m arguing that, for Muir to have persuaded his opposition, he would have had to rely on premises that he rejected. This is sometimes called the “sincerity problem” in rhetoric: to what extent, and under what circumstances, should we make arguments we don’t believe in order to achieve an end in which we do believe? Muir didn’t argue from insincere premises; that may have weakened his effectiveness in the moment, but it definitely strengthened his effectiveness in the long run. His Hetch Hetchy pamphlet continues to be powerfully motivating for people, perhaps more motivating than it would have been had he compromised his rhetoric in order to be effective in the short term. Muir’s demagoguery did no harm, and it may even have done some good. Demagoguery isn’t necessarily harmful.

Demagoguery and Democracy

[image source: https://en.wikipedia.org/wiki/Hetch_Hetchy#/media/File:Hetch_Hetchy_Valley.jpg]

 

On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that one would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, one that would (I think) reach more people than the other.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I think highly specialized academic writing is in any way a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1000 sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, though he was a big deal at one moment; Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book The Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions one can draw about whether trade or scholarly books have more impact, are more or less important, or are more or less valuable intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual make a lot of odd assumptions that assume binaries—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.

 

King Lear and charismatic leadership

Recently, various highly factionalized media worked their audiences into a froth by reporting that New York’s “Shakespeare in the Park” had represented Julius Caesar as Trump. That these media were successful shows that people are willing to get outraged on the basis of misinformation, or no information at all. Shakespeare’s Caesar is neither a villain nor a tyrant.

And it’s the wrong Shakespeare anyway for a Trump comparison. Shakespeare was deeply ambivalent about what we would now consider democratic discourse (look at how quickly Marc Antony turns the crowd, or Coriolanus’ inability to maintain popularity). But he wasn’t ambivalent about leaders who insist on hyperbolic displays of personal loyalty. They are the source of tragedy.

The truly Shakespearean moment recently was Trump’s cabinet meeting, which he seemed to think would gain him popularity with his base, since it was his entire cabinet expressing perfect loyalty to him. And anyone even a little familiar with Shakespeare immediately thought of the scene in King Lear when Lear demands professions of loyalty. Trump isn’t Caesar; he’s Lear.

Lear’s insistence on loyalty meant that he rejected the person who was speaking the truth to him, and the consequence was tragedy. It isn’t exactly news, at least among people familiar with the history of persuasion and leadership, that leaders who surround themselves with people who make them feel great (or who worship them) make bad decisions. Ian Kershaw’s elegant Fateful Choices makes the point vividly, showing how leaders like Mussolini, Hitler, and Hirohito skidded into increasingly bad decisions because they treated dissent as disloyalty.

In business schools, this kind of leadership is called “charismatic,” and it is often presented as an unequivocal good—something that is surely making Max Weber (who first described it in 1916) turn in his grave. Weber identified three sources of authority for leaders: traditional, legal, and charismatic; Hannah Arendt (the scholar of totalitarianism) added a fourth: the authority of someone who has demonstrated context-specific knowledge. Weber argued that charismatic leadership is the most volatile.

In business schools, charismatic leadership is praised because it motivates followers to go above and beyond; followers who believe in the leader are less likely to resist. And, while that might seem like an unequivocal good, it’s only good if the leader is leading the institution in a good direction. If the direction is bad, then disaster just happens faster.

Charismatic leadership is a relationship that requires complete acquiescence and submission on the part of the followers. It assumes that there is a limited amount of power available (thus, the more power that others have, the less there is for the leader). And so the charismatic leader is threatened by others taking leadership roles, pointing out her errors, or having expertise to which she should submit. It is a relationship of pure hierarchy, simultaneously robust and fragile: robust because it can withstand an extraordinary amount of disconfirming evidence (that the leader is not actually all that good, does not have the requisite traits, is out of her depth, is making bad decisions) by simply rejecting it; fragile because the admission of a serious flaw on the part of the leader destroys the relationship entirely. A leader who relies on legitimacy isn’t weakened by disagreement (and might even be strengthened by it), but a charismatic leader is.

Hence, leaders who rely on legitimacy encourage disagreement and dissent because that leader’s authority is strengthened by the expertise, contributions, and criticism of others, but charismatic leaders insist on loyalty.

Charismatic leadership is praised in many areas because it leads to blind loyalty, and blind loyalty certainly does make for an organization whose people work feverishly toward the leader’s ends. But what if those ends aren’t good?

Whether charismatic leadership is the best model for business is more disputed than best sellers on leadership might lead one to believe. There is no dispute, however, that it’s a model of leadership profoundly at odds with a democratic society. It is deeply authoritarian, since the authority of the leader is the basis of decision-making, and dissent is disloyalty.

Lear demanded oaths of blind loyalty, and, as often happens under those circumstances, the person who was committed to the truth wouldn’t take such an oath. And that person was the hero.